00:00:00.001 Started by upstream project "autotest-nightly-lts" build number 1996 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 3257 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.129 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.129 The recommended git tool is: git 00:00:00.129 using credential 00000000-0000-0000-0000-000000000002 00:00:00.131 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.156 Fetching changes from the remote Git repository 00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.191 Using shallow fetch with depth 1 00:00:00.191 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.191 > git --version # timeout=10 00:00:00.218 > git --version # 'git version 2.39.2' 00:00:00.218 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.245 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.245 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.036 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.047 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.059 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD) 00:00:06.059 > git config core.sparsecheckout # timeout=10 00:00:06.069 > git read-tree -mu HEAD # timeout=10 00:00:06.085 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5 00:00:06.105 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing" 00:00:06.105 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10 00:00:06.211 [Pipeline] Start of Pipeline 00:00:06.226 [Pipeline] library 00:00:06.227 Loading library shm_lib@master 00:00:06.227 Library shm_lib@master is cached. Copying from home. 00:00:06.245 [Pipeline] node 00:00:06.254 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.256 [Pipeline] { 00:00:06.267 [Pipeline] catchError 00:00:06.269 [Pipeline] { 00:00:06.285 [Pipeline] wrap 00:00:06.300 [Pipeline] { 00:00:06.308 [Pipeline] stage 00:00:06.310 [Pipeline] { (Prologue) 00:00:06.481 [Pipeline] sh 00:00:06.762 + logger -p user.info -t JENKINS-CI 00:00:06.780 [Pipeline] echo 00:00:06.781 Node: GP11 00:00:06.788 [Pipeline] sh 00:00:07.079 [Pipeline] setCustomBuildProperty 00:00:07.089 [Pipeline] echo 00:00:07.090 Cleanup processes 00:00:07.094 [Pipeline] sh 00:00:07.373 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.373 1917490 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.383 [Pipeline] sh 00:00:07.659 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.659 ++ grep -v 'sudo pgrep' 00:00:07.659 ++ awk '{print $1}' 00:00:07.659 + sudo kill -9 00:00:07.659 + true 00:00:07.673 [Pipeline] cleanWs 00:00:07.681 [WS-CLEANUP] Deleting project workspace... 00:00:07.681 [WS-CLEANUP] Deferred wipeout is used... 
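Note: the prologue above clears any processes left over from a previous run of this workspace before wiping it. A minimal sketch of that cleanup, assuming plain bash (the WORKSPACE variable name and the comments are illustrative, not taken from the job scripts):

  WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # pgrep -af prints "<pid> <full command line>" for every process whose
  # command line mentions the workspace's spdk checkout
  pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 with an empty pid list fails, which is why the log falls back to "+ true"
  sudo kill -9 $pids || true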
00:00:07.688 [WS-CLEANUP] done 00:00:07.691 [Pipeline] setCustomBuildProperty 00:00:07.704 [Pipeline] sh 00:00:08.003 + sudo git config --global --replace-all safe.directory '*' 00:00:08.085 [Pipeline] httpRequest 00:00:08.124 [Pipeline] echo 00:00:08.127 Sorcerer 10.211.164.101 is alive 00:00:08.135 [Pipeline] httpRequest 00:00:08.140 HttpMethod: GET 00:00:08.141 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:08.142 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:08.150 Response Code: HTTP/1.1 200 OK 00:00:08.151 Success: Status code 200 is in the accepted range: 200,404 00:00:08.151 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:11.839 [Pipeline] sh 00:00:12.122 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz 00:00:12.137 [Pipeline] httpRequest 00:00:12.157 [Pipeline] echo 00:00:12.159 Sorcerer 10.211.164.101 is alive 00:00:12.169 [Pipeline] httpRequest 00:00:12.174 HttpMethod: GET 00:00:12.174 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:12.175 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:12.198 Response Code: HTTP/1.1 200 OK 00:00:12.198 Success: Status code 200 is in the accepted range: 200,404 00:00:12.199 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:56.165 [Pipeline] sh 00:00:56.449 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:58.995 [Pipeline] sh 00:00:59.280 + git -C spdk log --oneline -n5 00:00:59.280 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:00:59.280 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:00:59.280 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:00:59.280 e03c164a1 nvme: add nvme_ctrlr_lock 00:00:59.280 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:00:59.293 [Pipeline] } 00:00:59.311 [Pipeline] // stage 00:00:59.321 [Pipeline] stage 00:00:59.324 [Pipeline] { (Prepare) 00:00:59.341 [Pipeline] writeFile 00:00:59.354 [Pipeline] sh 00:00:59.634 + logger -p user.info -t JENKINS-CI 00:00:59.647 [Pipeline] sh 00:00:59.931 + logger -p user.info -t JENKINS-CI 00:00:59.945 [Pipeline] sh 00:01:00.229 + cat autorun-spdk.conf 00:01:00.229 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.229 SPDK_TEST_NVMF=1 00:01:00.229 SPDK_TEST_NVME_CLI=1 00:01:00.229 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.229 SPDK_TEST_NVMF_NICS=e810 00:01:00.229 SPDK_RUN_UBSAN=1 00:01:00.229 NET_TYPE=phy 00:01:00.237 RUN_NIGHTLY=1 00:01:00.241 [Pipeline] readFile 00:01:00.271 [Pipeline] withEnv 00:01:00.273 [Pipeline] { 00:01:00.289 [Pipeline] sh 00:01:00.574 + set -ex 00:01:00.574 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:00.575 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:00.575 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:00.575 ++ SPDK_TEST_NVMF=1 00:01:00.575 ++ SPDK_TEST_NVME_CLI=1 00:01:00.575 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:00.575 ++ SPDK_TEST_NVMF_NICS=e810 00:01:00.575 ++ SPDK_RUN_UBSAN=1 00:01:00.575 ++ NET_TYPE=phy 00:01:00.575 ++ RUN_NIGHTLY=1 00:01:00.575 + case $SPDK_TEST_NVMF_NICS in 00:01:00.575 + DRIVERS=ice 00:01:00.575 + [[ tcp == \r\d\m\a ]] 00:01:00.575 + [[ -n ice ]] 00:01:00.575 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw 
iw_cxgb4 00:01:00.575 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:04.768 rmmod: ERROR: Module irdma is not currently loaded 00:01:04.768 rmmod: ERROR: Module i40iw is not currently loaded 00:01:04.768 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:04.768 + true 00:01:04.768 + for D in $DRIVERS 00:01:04.768 + sudo modprobe ice 00:01:04.768 + exit 0 00:01:04.778 [Pipeline] } 00:01:04.795 [Pipeline] // withEnv 00:01:04.800 [Pipeline] } 00:01:04.812 [Pipeline] // stage 00:01:04.822 [Pipeline] catchError 00:01:04.824 [Pipeline] { 00:01:04.839 [Pipeline] timeout 00:01:04.840 Timeout set to expire in 50 min 00:01:04.841 [Pipeline] { 00:01:04.858 [Pipeline] stage 00:01:04.860 [Pipeline] { (Tests) 00:01:04.880 [Pipeline] sh 00:01:05.164 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.164 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.164 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.164 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:05.164 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:05.164 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.164 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:05.164 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.164 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:05.164 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:05.164 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:05.164 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:05.164 + source /etc/os-release 00:01:05.164 ++ NAME='Fedora Linux' 00:01:05.164 ++ VERSION='38 (Cloud Edition)' 00:01:05.164 ++ ID=fedora 00:01:05.164 ++ VERSION_ID=38 00:01:05.164 ++ VERSION_CODENAME= 00:01:05.164 ++ PLATFORM_ID=platform:f38 00:01:05.164 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:05.164 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:05.164 ++ LOGO=fedora-logo-icon 00:01:05.164 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:05.164 ++ HOME_URL=https://fedoraproject.org/ 00:01:05.164 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:05.164 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:05.164 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:05.164 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:05.164 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:05.164 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:05.164 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:05.164 ++ SUPPORT_END=2024-05-14 00:01:05.164 ++ VARIANT='Cloud Edition' 00:01:05.164 ++ VARIANT_ID=cloud 00:01:05.164 + uname -a 00:01:05.164 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:05.164 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:06.100 Hugepages 00:01:06.100 node hugesize free / total 00:01:06.100 node0 1048576kB 0 / 0 00:01:06.100 node0 2048kB 0 / 0 00:01:06.100 node1 1048576kB 0 / 0 00:01:06.100 node1 2048kB 0 / 0 00:01:06.100 00:01:06.100 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:06.100 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:06.100 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:06.100 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:06.100 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:06.100 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:06.100 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:06.100 
I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:06.100 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:06.100 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:06.100 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:06.100 + rm -f /tmp/spdk-ld-path 00:01:06.100 + source autorun-spdk.conf 00:01:06.100 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.100 ++ SPDK_TEST_NVMF=1 00:01:06.100 ++ SPDK_TEST_NVME_CLI=1 00:01:06.100 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.100 ++ SPDK_TEST_NVMF_NICS=e810 00:01:06.100 ++ SPDK_RUN_UBSAN=1 00:01:06.100 ++ NET_TYPE=phy 00:01:06.100 ++ RUN_NIGHTLY=1 00:01:06.100 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:06.100 + [[ -n '' ]] 00:01:06.100 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.100 + for M in /var/spdk/build-*-manifest.txt 00:01:06.100 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:06.100 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.100 + for M in /var/spdk/build-*-manifest.txt 00:01:06.100 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:06.100 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:06.100 ++ uname 00:01:06.100 + [[ Linux == \L\i\n\u\x ]] 00:01:06.100 + sudo dmesg -T 00:01:06.100 + sudo dmesg --clear 00:01:06.100 + dmesg_pid=1918794 00:01:06.100 + [[ Fedora Linux == FreeBSD ]] 00:01:06.100 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.100 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:06.100 + sudo dmesg -Tw 00:01:06.100 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:06.100 + [[ -x /usr/src/fio-static/fio ]] 00:01:06.100 + export FIO_BIN=/usr/src/fio-static/fio 00:01:06.100 + FIO_BIN=/usr/src/fio-static/fio 00:01:06.100 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:06.100 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:01:06.100 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:06.100 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.100 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:06.100 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:06.100 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.100 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:06.100 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:06.360 Test configuration: 00:01:06.360 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:06.360 SPDK_TEST_NVMF=1 00:01:06.360 SPDK_TEST_NVME_CLI=1 00:01:06.360 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:06.360 SPDK_TEST_NVMF_NICS=e810 00:01:06.360 SPDK_RUN_UBSAN=1 00:01:06.360 NET_TYPE=phy 00:01:06.360 RUN_NIGHTLY=1 15:24:45 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:06.360 15:24:45 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:06.360 15:24:45 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:06.360 15:24:45 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:06.360 15:24:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.360 15:24:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.360 15:24:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.360 15:24:45 -- paths/export.sh@5 -- $ export PATH 00:01:06.360 15:24:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:06.360 15:24:45 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:06.360 15:24:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:06.360 15:24:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720617885.XXXXXX 00:01:06.360 15:24:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720617885.bRjrid 00:01:06.360 15:24:45 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:06.360 15:24:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 
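Note: the whole job is driven by the autorun-spdk.conf shown above; autorun.sh sources it so that each SPDK_* flag gates a test suite, and SPDK_TEST_NVMF_NICS selects the NIC driver to load. A minimal bash sketch of writing and consuming such a file (the set -a/+a export trick is illustrative, not the exact CI code):

  printf '%s\n' \
    'SPDK_RUN_FUNCTIONAL_TEST=1' \
    'SPDK_TEST_NVMF=1' \
    'SPDK_TEST_NVME_CLI=1' \
    'SPDK_TEST_NVMF_TRANSPORT=tcp' \
    'SPDK_TEST_NVMF_NICS=e810' \
    'SPDK_RUN_UBSAN=1' \
    'NET_TYPE=phy' \
    'RUN_NIGHTLY=1' > autorun-spdk.conf
  set -a                        # export everything the conf defines
  source ./autorun-spdk.conf
  set +a
  case "$SPDK_TEST_NVMF_NICS" in
    e810) sudo modprobe ice ;;  # E810 NICs use the ice driver, matching "+ sudo modprobe ice" above
  esac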
00:01:06.360 15:24:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:06.360 15:24:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:06.360 15:24:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:06.360 15:24:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:06.360 15:24:45 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:06.360 15:24:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.361 15:24:45 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:01:06.361 15:24:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:06.361 15:24:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:06.361 15:24:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:06.361 15:24:45 -- spdk/autobuild.sh@16 -- $ date -u 00:01:06.361 Wed Jul 10 01:24:45 PM UTC 2024 00:01:06.361 15:24:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:06.361 LTS-59-g4b94202c6 00:01:06.361 15:24:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:06.361 15:24:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:06.361 15:24:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:06.361 15:24:45 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:06.361 15:24:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:06.361 15:24:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.361 ************************************ 00:01:06.361 START TEST ubsan 00:01:06.361 ************************************ 00:01:06.361 15:24:45 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:06.361 using ubsan 00:01:06.361 00:01:06.361 real 0m0.000s 00:01:06.361 user 0m0.000s 00:01:06.361 sys 0m0.000s 00:01:06.361 15:24:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:06.361 15:24:45 -- common/autotest_common.sh@10 -- $ set +x 00:01:06.361 ************************************ 00:01:06.361 END TEST ubsan 00:01:06.361 ************************************ 00:01:06.361 15:24:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:06.361 15:24:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:06.361 15:24:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:06.361 15:24:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:06.361 15:24:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:06.361 15:24:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:06.361 15:24:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:06.361 15:24:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:06.361 15:24:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-shared 00:01:06.361 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:06.361 Using default DPDK in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:06.620 Using 'verbs' RDMA provider 00:01:17.227 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 00:01:27.220 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:01:27.220 Creating mk/config.mk...done. 00:01:27.220 Creating mk/cc.flags.mk...done. 00:01:27.220 Type 'make' to build. 00:01:27.220 15:25:06 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:01:27.220 15:25:06 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:27.220 15:25:06 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:27.220 15:25:06 -- common/autotest_common.sh@10 -- $ set +x 00:01:27.220 ************************************ 00:01:27.220 START TEST make 00:01:27.220 ************************************ 00:01:27.220 15:25:06 -- common/autotest_common.sh@1104 -- $ make -j48 00:01:27.220 make[1]: Nothing to be done for 'all'. 00:01:35.363 The Meson build system 00:01:35.363 Version: 1.3.1 00:01:35.363 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:35.363 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:35.363 Build type: native build 00:01:35.363 Program cat found: YES (/usr/bin/cat) 00:01:35.363 Project name: DPDK 00:01:35.363 Project version: 23.11.0 00:01:35.363 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:35.363 C linker for the host machine: cc ld.bfd 2.39-16 00:01:35.363 Host machine cpu family: x86_64 00:01:35.363 Host machine cpu: x86_64 00:01:35.363 Message: ## Building in Developer Mode ## 00:01:35.363 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:35.363 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:35.363 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:35.363 Program python3 found: YES (/usr/bin/python3) 00:01:35.363 Program cat found: YES (/usr/bin/cat) 00:01:35.363 Compiler for C supports arguments -march=native: YES 00:01:35.363 Checking for size of "void *" : 8 00:01:35.363 Checking for size of "void *" : 8 (cached) 00:01:35.363 Library m found: YES 00:01:35.363 Library numa found: YES 00:01:35.363 Has header "numaif.h" : YES 00:01:35.363 Library fdt found: NO 00:01:35.363 Library execinfo found: NO 00:01:35.363 Has header "execinfo.h" : YES 00:01:35.363 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:35.363 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:35.363 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:35.363 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:35.363 Run-time dependency openssl found: YES 3.0.9 00:01:35.363 Run-time dependency libpcap found: YES 1.10.4 00:01:35.363 Has header "pcap.h" with dependency libpcap: YES 00:01:35.363 Compiler for C supports arguments -Wcast-qual: YES 00:01:35.363 Compiler for C supports arguments -Wdeprecated: YES 00:01:35.363 Compiler for C supports arguments -Wformat: YES 00:01:35.363 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:35.363 Compiler for C supports arguments -Wformat-security: NO 00:01:35.363 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:35.363 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:35.363 Compiler for C 
supports arguments -Wnested-externs: YES 00:01:35.363 Compiler for C supports arguments -Wold-style-definition: YES 00:01:35.363 Compiler for C supports arguments -Wpointer-arith: YES 00:01:35.363 Compiler for C supports arguments -Wsign-compare: YES 00:01:35.363 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:35.363 Compiler for C supports arguments -Wundef: YES 00:01:35.363 Compiler for C supports arguments -Wwrite-strings: YES 00:01:35.363 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:35.363 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:35.363 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:35.363 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:35.363 Program objdump found: YES (/usr/bin/objdump) 00:01:35.363 Compiler for C supports arguments -mavx512f: YES 00:01:35.363 Checking if "AVX512 checking" compiles: YES 00:01:35.363 Fetching value of define "__SSE4_2__" : 1 00:01:35.363 Fetching value of define "__AES__" : 1 00:01:35.363 Fetching value of define "__AVX__" : 1 00:01:35.363 Fetching value of define "__AVX2__" : (undefined) 00:01:35.363 Fetching value of define "__AVX512BW__" : (undefined) 00:01:35.363 Fetching value of define "__AVX512CD__" : (undefined) 00:01:35.363 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:35.363 Fetching value of define "__AVX512F__" : (undefined) 00:01:35.363 Fetching value of define "__AVX512VL__" : (undefined) 00:01:35.363 Fetching value of define "__PCLMUL__" : 1 00:01:35.363 Fetching value of define "__RDRND__" : 1 00:01:35.363 Fetching value of define "__RDSEED__" : (undefined) 00:01:35.363 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:35.363 Fetching value of define "__znver1__" : (undefined) 00:01:35.363 Fetching value of define "__znver2__" : (undefined) 00:01:35.363 Fetching value of define "__znver3__" : (undefined) 00:01:35.363 Fetching value of define "__znver4__" : (undefined) 00:01:35.363 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:35.363 Message: lib/log: Defining dependency "log" 00:01:35.363 Message: lib/kvargs: Defining dependency "kvargs" 00:01:35.363 Message: lib/telemetry: Defining dependency "telemetry" 00:01:35.363 Checking for function "getentropy" : NO 00:01:35.363 Message: lib/eal: Defining dependency "eal" 00:01:35.363 Message: lib/ring: Defining dependency "ring" 00:01:35.363 Message: lib/rcu: Defining dependency "rcu" 00:01:35.363 Message: lib/mempool: Defining dependency "mempool" 00:01:35.363 Message: lib/mbuf: Defining dependency "mbuf" 00:01:35.363 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:35.363 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:35.364 Compiler for C supports arguments -mpclmul: YES 00:01:35.364 Compiler for C supports arguments -maes: YES 00:01:35.364 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:35.364 Compiler for C supports arguments -mavx512bw: YES 00:01:35.364 Compiler for C supports arguments -mavx512dq: YES 00:01:35.364 Compiler for C supports arguments -mavx512vl: YES 00:01:35.364 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:35.364 Compiler for C supports arguments -mavx2: YES 00:01:35.364 Compiler for C supports arguments -mavx: YES 00:01:35.364 Message: lib/net: Defining dependency "net" 00:01:35.364 Message: lib/meter: Defining dependency "meter" 00:01:35.364 Message: lib/ethdev: Defining dependency "ethdev" 00:01:35.364 Message: lib/pci: Defining dependency 
"pci" 00:01:35.364 Message: lib/cmdline: Defining dependency "cmdline" 00:01:35.364 Message: lib/hash: Defining dependency "hash" 00:01:35.364 Message: lib/timer: Defining dependency "timer" 00:01:35.364 Message: lib/compressdev: Defining dependency "compressdev" 00:01:35.364 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:35.364 Message: lib/dmadev: Defining dependency "dmadev" 00:01:35.364 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:35.364 Message: lib/power: Defining dependency "power" 00:01:35.364 Message: lib/reorder: Defining dependency "reorder" 00:01:35.364 Message: lib/security: Defining dependency "security" 00:01:35.364 Has header "linux/userfaultfd.h" : YES 00:01:35.364 Has header "linux/vduse.h" : YES 00:01:35.364 Message: lib/vhost: Defining dependency "vhost" 00:01:35.364 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:35.364 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:35.364 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:35.364 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:35.364 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:35.364 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:35.364 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:35.364 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:35.364 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:35.364 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:35.364 Program doxygen found: YES (/usr/bin/doxygen) 00:01:35.364 Configuring doxy-api-html.conf using configuration 00:01:35.364 Configuring doxy-api-man.conf using configuration 00:01:35.364 Program mandb found: YES (/usr/bin/mandb) 00:01:35.364 Program sphinx-build found: NO 00:01:35.364 Configuring rte_build_config.h using configuration 00:01:35.364 Message: 00:01:35.364 ================= 00:01:35.364 Applications Enabled 00:01:35.364 ================= 00:01:35.364 00:01:35.364 apps: 00:01:35.364 00:01:35.364 00:01:35.364 Message: 00:01:35.364 ================= 00:01:35.364 Libraries Enabled 00:01:35.364 ================= 00:01:35.364 00:01:35.364 libs: 00:01:35.364 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:35.364 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:35.364 cryptodev, dmadev, power, reorder, security, vhost, 00:01:35.364 00:01:35.364 Message: 00:01:35.364 =============== 00:01:35.364 Drivers Enabled 00:01:35.364 =============== 00:01:35.364 00:01:35.364 common: 00:01:35.364 00:01:35.364 bus: 00:01:35.364 pci, vdev, 00:01:35.364 mempool: 00:01:35.364 ring, 00:01:35.364 dma: 00:01:35.364 00:01:35.364 net: 00:01:35.364 00:01:35.364 crypto: 00:01:35.364 00:01:35.364 compress: 00:01:35.364 00:01:35.364 vdpa: 00:01:35.364 00:01:35.364 00:01:35.364 Message: 00:01:35.364 ================= 00:01:35.364 Content Skipped 00:01:35.364 ================= 00:01:35.364 00:01:35.364 apps: 00:01:35.364 dumpcap: explicitly disabled via build config 00:01:35.364 graph: explicitly disabled via build config 00:01:35.364 pdump: explicitly disabled via build config 00:01:35.364 proc-info: explicitly disabled via build config 00:01:35.364 test-acl: explicitly disabled via build config 00:01:35.364 test-bbdev: explicitly disabled via build config 00:01:35.364 test-cmdline: explicitly disabled via build config 00:01:35.364 test-compress-perf: explicitly 
disabled via build config 00:01:35.364 test-crypto-perf: explicitly disabled via build config 00:01:35.364 test-dma-perf: explicitly disabled via build config 00:01:35.364 test-eventdev: explicitly disabled via build config 00:01:35.364 test-fib: explicitly disabled via build config 00:01:35.364 test-flow-perf: explicitly disabled via build config 00:01:35.364 test-gpudev: explicitly disabled via build config 00:01:35.364 test-mldev: explicitly disabled via build config 00:01:35.364 test-pipeline: explicitly disabled via build config 00:01:35.364 test-pmd: explicitly disabled via build config 00:01:35.364 test-regex: explicitly disabled via build config 00:01:35.364 test-sad: explicitly disabled via build config 00:01:35.364 test-security-perf: explicitly disabled via build config 00:01:35.364 00:01:35.364 libs: 00:01:35.364 metrics: explicitly disabled via build config 00:01:35.364 acl: explicitly disabled via build config 00:01:35.364 bbdev: explicitly disabled via build config 00:01:35.364 bitratestats: explicitly disabled via build config 00:01:35.364 bpf: explicitly disabled via build config 00:01:35.364 cfgfile: explicitly disabled via build config 00:01:35.364 distributor: explicitly disabled via build config 00:01:35.364 efd: explicitly disabled via build config 00:01:35.364 eventdev: explicitly disabled via build config 00:01:35.364 dispatcher: explicitly disabled via build config 00:01:35.364 gpudev: explicitly disabled via build config 00:01:35.364 gro: explicitly disabled via build config 00:01:35.364 gso: explicitly disabled via build config 00:01:35.364 ip_frag: explicitly disabled via build config 00:01:35.364 jobstats: explicitly disabled via build config 00:01:35.364 latencystats: explicitly disabled via build config 00:01:35.364 lpm: explicitly disabled via build config 00:01:35.364 member: explicitly disabled via build config 00:01:35.364 pcapng: explicitly disabled via build config 00:01:35.364 rawdev: explicitly disabled via build config 00:01:35.364 regexdev: explicitly disabled via build config 00:01:35.364 mldev: explicitly disabled via build config 00:01:35.364 rib: explicitly disabled via build config 00:01:35.364 sched: explicitly disabled via build config 00:01:35.364 stack: explicitly disabled via build config 00:01:35.364 ipsec: explicitly disabled via build config 00:01:35.364 pdcp: explicitly disabled via build config 00:01:35.364 fib: explicitly disabled via build config 00:01:35.364 port: explicitly disabled via build config 00:01:35.364 pdump: explicitly disabled via build config 00:01:35.364 table: explicitly disabled via build config 00:01:35.364 pipeline: explicitly disabled via build config 00:01:35.364 graph: explicitly disabled via build config 00:01:35.364 node: explicitly disabled via build config 00:01:35.364 00:01:35.364 drivers: 00:01:35.364 common/cpt: not in enabled drivers build config 00:01:35.364 common/dpaax: not in enabled drivers build config 00:01:35.364 common/iavf: not in enabled drivers build config 00:01:35.364 common/idpf: not in enabled drivers build config 00:01:35.364 common/mvep: not in enabled drivers build config 00:01:35.364 common/octeontx: not in enabled drivers build config 00:01:35.364 bus/auxiliary: not in enabled drivers build config 00:01:35.364 bus/cdx: not in enabled drivers build config 00:01:35.364 bus/dpaa: not in enabled drivers build config 00:01:35.364 bus/fslmc: not in enabled drivers build config 00:01:35.364 bus/ifpga: not in enabled drivers build config 00:01:35.364 bus/platform: not in enabled drivers 
build config 00:01:35.364 bus/vmbus: not in enabled drivers build config 00:01:35.364 common/cnxk: not in enabled drivers build config 00:01:35.364 common/mlx5: not in enabled drivers build config 00:01:35.364 common/nfp: not in enabled drivers build config 00:01:35.365 common/qat: not in enabled drivers build config 00:01:35.365 common/sfc_efx: not in enabled drivers build config 00:01:35.365 mempool/bucket: not in enabled drivers build config 00:01:35.365 mempool/cnxk: not in enabled drivers build config 00:01:35.365 mempool/dpaa: not in enabled drivers build config 00:01:35.365 mempool/dpaa2: not in enabled drivers build config 00:01:35.365 mempool/octeontx: not in enabled drivers build config 00:01:35.365 mempool/stack: not in enabled drivers build config 00:01:35.365 dma/cnxk: not in enabled drivers build config 00:01:35.365 dma/dpaa: not in enabled drivers build config 00:01:35.365 dma/dpaa2: not in enabled drivers build config 00:01:35.365 dma/hisilicon: not in enabled drivers build config 00:01:35.365 dma/idxd: not in enabled drivers build config 00:01:35.365 dma/ioat: not in enabled drivers build config 00:01:35.365 dma/skeleton: not in enabled drivers build config 00:01:35.365 net/af_packet: not in enabled drivers build config 00:01:35.365 net/af_xdp: not in enabled drivers build config 00:01:35.365 net/ark: not in enabled drivers build config 00:01:35.365 net/atlantic: not in enabled drivers build config 00:01:35.365 net/avp: not in enabled drivers build config 00:01:35.365 net/axgbe: not in enabled drivers build config 00:01:35.365 net/bnx2x: not in enabled drivers build config 00:01:35.365 net/bnxt: not in enabled drivers build config 00:01:35.365 net/bonding: not in enabled drivers build config 00:01:35.365 net/cnxk: not in enabled drivers build config 00:01:35.365 net/cpfl: not in enabled drivers build config 00:01:35.365 net/cxgbe: not in enabled drivers build config 00:01:35.365 net/dpaa: not in enabled drivers build config 00:01:35.365 net/dpaa2: not in enabled drivers build config 00:01:35.365 net/e1000: not in enabled drivers build config 00:01:35.365 net/ena: not in enabled drivers build config 00:01:35.365 net/enetc: not in enabled drivers build config 00:01:35.365 net/enetfec: not in enabled drivers build config 00:01:35.365 net/enic: not in enabled drivers build config 00:01:35.365 net/failsafe: not in enabled drivers build config 00:01:35.365 net/fm10k: not in enabled drivers build config 00:01:35.365 net/gve: not in enabled drivers build config 00:01:35.365 net/hinic: not in enabled drivers build config 00:01:35.365 net/hns3: not in enabled drivers build config 00:01:35.365 net/i40e: not in enabled drivers build config 00:01:35.365 net/iavf: not in enabled drivers build config 00:01:35.365 net/ice: not in enabled drivers build config 00:01:35.365 net/idpf: not in enabled drivers build config 00:01:35.365 net/igc: not in enabled drivers build config 00:01:35.365 net/ionic: not in enabled drivers build config 00:01:35.365 net/ipn3ke: not in enabled drivers build config 00:01:35.365 net/ixgbe: not in enabled drivers build config 00:01:35.365 net/mana: not in enabled drivers build config 00:01:35.365 net/memif: not in enabled drivers build config 00:01:35.365 net/mlx4: not in enabled drivers build config 00:01:35.365 net/mlx5: not in enabled drivers build config 00:01:35.365 net/mvneta: not in enabled drivers build config 00:01:35.365 net/mvpp2: not in enabled drivers build config 00:01:35.365 net/netvsc: not in enabled drivers build config 00:01:35.365 net/nfb: not 
in enabled drivers build config 00:01:35.365 net/nfp: not in enabled drivers build config 00:01:35.365 net/ngbe: not in enabled drivers build config 00:01:35.365 net/null: not in enabled drivers build config 00:01:35.365 net/octeontx: not in enabled drivers build config 00:01:35.365 net/octeon_ep: not in enabled drivers build config 00:01:35.365 net/pcap: not in enabled drivers build config 00:01:35.365 net/pfe: not in enabled drivers build config 00:01:35.365 net/qede: not in enabled drivers build config 00:01:35.365 net/ring: not in enabled drivers build config 00:01:35.365 net/sfc: not in enabled drivers build config 00:01:35.365 net/softnic: not in enabled drivers build config 00:01:35.365 net/tap: not in enabled drivers build config 00:01:35.365 net/thunderx: not in enabled drivers build config 00:01:35.365 net/txgbe: not in enabled drivers build config 00:01:35.365 net/vdev_netvsc: not in enabled drivers build config 00:01:35.365 net/vhost: not in enabled drivers build config 00:01:35.365 net/virtio: not in enabled drivers build config 00:01:35.365 net/vmxnet3: not in enabled drivers build config 00:01:35.365 raw/*: missing internal dependency, "rawdev" 00:01:35.365 crypto/armv8: not in enabled drivers build config 00:01:35.365 crypto/bcmfs: not in enabled drivers build config 00:01:35.365 crypto/caam_jr: not in enabled drivers build config 00:01:35.365 crypto/ccp: not in enabled drivers build config 00:01:35.365 crypto/cnxk: not in enabled drivers build config 00:01:35.365 crypto/dpaa_sec: not in enabled drivers build config 00:01:35.365 crypto/dpaa2_sec: not in enabled drivers build config 00:01:35.365 crypto/ipsec_mb: not in enabled drivers build config 00:01:35.365 crypto/mlx5: not in enabled drivers build config 00:01:35.365 crypto/mvsam: not in enabled drivers build config 00:01:35.365 crypto/nitrox: not in enabled drivers build config 00:01:35.365 crypto/null: not in enabled drivers build config 00:01:35.365 crypto/octeontx: not in enabled drivers build config 00:01:35.365 crypto/openssl: not in enabled drivers build config 00:01:35.365 crypto/scheduler: not in enabled drivers build config 00:01:35.365 crypto/uadk: not in enabled drivers build config 00:01:35.365 crypto/virtio: not in enabled drivers build config 00:01:35.365 compress/isal: not in enabled drivers build config 00:01:35.365 compress/mlx5: not in enabled drivers build config 00:01:35.365 compress/octeontx: not in enabled drivers build config 00:01:35.365 compress/zlib: not in enabled drivers build config 00:01:35.365 regex/*: missing internal dependency, "regexdev" 00:01:35.365 ml/*: missing internal dependency, "mldev" 00:01:35.365 vdpa/ifc: not in enabled drivers build config 00:01:35.365 vdpa/mlx5: not in enabled drivers build config 00:01:35.365 vdpa/nfp: not in enabled drivers build config 00:01:35.365 vdpa/sfc: not in enabled drivers build config 00:01:35.365 event/*: missing internal dependency, "eventdev" 00:01:35.365 baseband/*: missing internal dependency, "bbdev" 00:01:35.365 gpu/*: missing internal dependency, "gpudev" 00:01:35.365 00:01:35.365 00:01:35.623 Build targets in project: 85 00:01:35.623 00:01:35.623 DPDK 23.11.0 00:01:35.623 00:01:35.623 User defined options 00:01:35.623 buildtype : debug 00:01:35.623 default_library : shared 00:01:35.623 libdir : lib 00:01:35.623 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:35.623 c_args : -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:35.623 c_link_args : 00:01:35.623 
cpu_instruction_set: native 00:01:35.623 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:35.623 disable_libs : bbdev,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:35.623 enable_docs : false 00:01:35.623 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:35.623 enable_kmods : false 00:01:35.623 tests : false 00:01:35.623 00:01:35.623 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:36.200 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:36.200 [1/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:36.200 [2/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:36.200 [3/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:36.200 [4/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:36.200 [5/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:36.200 [6/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:36.200 [7/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:36.200 [8/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:36.200 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:36.200 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:36.200 [11/265] Linking static target lib/librte_kvargs.a 00:01:36.200 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:36.200 [13/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:36.200 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:36.200 [15/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:36.200 [16/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:36.200 [17/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:36.200 [18/265] Linking static target lib/librte_log.a 00:01:36.200 [19/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:36.462 [20/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:36.462 [21/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:36.721 [22/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.986 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:36.986 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:36.986 [25/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:36.986 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:36.986 [27/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:36.986 [28/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:36.986 [29/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:36.986 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 
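Note: the DPDK submodule is configured with meson and built with ninja, as the output above shows. A rough hand-runnable equivalent, assuming the option names from the "User defined options" block (the long disable_apps/disable_libs lists are omitted for brevity, so this sketch builds more of DPDK than the CI job does; treat it as a sketch, not the exact dpdkbuild invocation SPDK uses):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
  meson setup build-tmp \
      --buildtype=debug \
      -Ddefault_library=shared \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
      -Denable_docs=false \
      -Dtests=false \
      -Dc_args='-fPIC -Werror'
  ninja -C build-tmp -j 48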
00:01:36.986 [31/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:36.986 [32/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:36.986 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:36.986 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:36.986 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:36.986 [36/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:36.986 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:36.986 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:36.986 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:36.986 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:36.986 [41/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:36.986 [42/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:36.986 [43/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:36.986 [44/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:36.986 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:36.986 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:36.986 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:36.986 [48/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:36.986 [49/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:36.986 [50/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:37.248 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:37.248 [52/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:37.248 [53/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:37.248 [54/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:37.248 [55/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:37.248 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:37.248 [57/265] Linking static target lib/librte_telemetry.a 00:01:37.248 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:37.248 [59/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:37.248 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:37.248 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:37.248 [62/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:37.248 [63/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:37.248 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:37.248 [65/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:37.248 [66/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:37.512 [67/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:37.512 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:37.512 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:37.512 [70/265] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:37.512 [71/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.512 [72/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:37.512 [73/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:37.512 [74/265] Linking static target lib/librte_pci.a 00:01:37.512 [75/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:37.512 [76/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:37.512 [77/265] Linking target lib/librte_log.so.24.0 00:01:37.512 [78/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:37.512 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:37.512 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:37.512 [81/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:37.776 [82/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:37.776 [83/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:37.776 [84/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:37.776 [85/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:37.776 [86/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:37.776 [87/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:37.776 [88/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:37.776 [89/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:37.776 [90/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:38.036 [91/265] Linking target lib/librte_kvargs.so.24.0 00:01:38.036 [92/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:38.036 [93/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.036 [94/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:38.036 [95/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:38.036 [96/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:38.036 [97/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:38.036 [98/265] Linking static target lib/librte_ring.a 00:01:38.036 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:38.036 [100/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:38.036 [101/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:38.036 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:38.036 [103/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:38.036 [104/265] Linking static target lib/librte_meter.a 00:01:38.036 [105/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.036 [106/265] Linking static target lib/librte_eal.a 00:01:38.036 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:38.036 [108/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:38.036 [109/265] Linking target lib/librte_telemetry.so.24.0 00:01:38.301 [110/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:38.301 [111/265] Generating symbol file 
lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:38.301 [112/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:38.301 [113/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:38.301 [114/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:38.301 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:38.301 [116/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:38.301 [117/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:38.301 [118/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:38.301 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:38.301 [120/265] Linking static target lib/librte_mempool.a 00:01:38.301 [121/265] Linking static target lib/librte_rcu.a 00:01:38.301 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:38.301 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:38.301 [124/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:38.301 [125/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:38.301 [126/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:38.562 [127/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:38.562 [128/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:38.562 [129/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:38.562 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:38.562 [131/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:38.562 [132/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:38.562 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:38.562 [134/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:38.562 [135/265] Linking static target lib/librte_cmdline.a 00:01:38.562 [136/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.562 [137/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:38.562 [138/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:38.562 [139/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:38.562 [140/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:38.562 [141/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:38.562 [142/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:38.562 [143/265] Linking static target lib/librte_net.a 00:01:38.824 [144/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:38.824 [145/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:38.824 [146/265] Linking static target lib/librte_timer.a 00:01:38.824 [147/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:38.824 [148/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:38.824 [149/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:38.824 [150/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:38.824 [151/265] Generating lib/rcu.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:38.824 [152/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:39.083 [153/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:39.083 [154/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:39.083 [155/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.083 [156/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:39.083 [157/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:39.083 [158/265] Linking static target lib/librte_dmadev.a 00:01:39.083 [159/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:39.083 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:39.345 [161/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:39.345 [162/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:39.345 [163/265] Linking static target lib/librte_hash.a 00:01:39.345 [164/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:39.345 [165/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.345 [166/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.345 [167/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:39.345 [168/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:39.345 [169/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:39.345 [170/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:39.345 [171/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:39.345 [172/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:39.345 [173/265] Linking static target lib/librte_compressdev.a 00:01:39.345 [174/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:39.345 [175/265] Linking static target lib/librte_power.a 00:01:39.345 [176/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:39.345 [177/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:39.345 [178/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:39.603 [179/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:39.604 [180/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:39.604 [181/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:39.604 [182/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:39.604 [183/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:39.604 [184/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.604 [185/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:39.604 [186/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.604 [187/265] Linking static target lib/librte_mbuf.a 00:01:39.604 [188/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:39.604 [189/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:39.604 [190/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:39.604 [191/265] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:01:39.604 [192/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.604 [193/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:39.604 [194/265] Linking static target drivers/librte_bus_vdev.a 00:01:39.862 [195/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:39.862 [196/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:39.862 [197/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.862 [198/265] Linking static target lib/librte_reorder.a 00:01:39.862 [199/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:39.862 [200/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.862 [201/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:39.862 [202/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:39.862 [203/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:39.862 [204/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.862 [205/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.862 [206/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:39.862 [207/265] Linking static target drivers/librte_bus_pci.a 00:01:39.862 [208/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:39.862 [209/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:39.862 [210/265] Linking static target lib/librte_security.a 00:01:39.862 [211/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:39.862 [212/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.862 [213/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:39.862 [214/265] Linking static target drivers/librte_mempool_ring.a 00:01:40.120 [215/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.120 [216/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:40.120 [217/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.120 [218/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:40.120 [219/265] Linking static target lib/librte_ethdev.a 00:01:40.378 [220/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:40.378 [221/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.378 [222/265] Linking static target lib/librte_cryptodev.a 00:01:40.378 [223/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:41.311 [224/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.244 [225/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:44.145 [226/265] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.404 [227/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:44.404 [228/265] 
Linking target lib/librte_eal.so.24.0 00:01:44.662 [229/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:44.662 [230/265] Linking target lib/librte_pci.so.24.0 00:01:44.662 [231/265] Linking target lib/librte_meter.so.24.0 00:01:44.662 [232/265] Linking target lib/librte_dmadev.so.24.0 00:01:44.662 [233/265] Linking target lib/librte_ring.so.24.0 00:01:44.662 [234/265] Linking target lib/librte_timer.so.24.0 00:01:44.662 [235/265] Linking target drivers/librte_bus_vdev.so.24.0 00:01:44.662 [236/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:44.662 [237/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:44.662 [238/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:44.662 [239/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:44.662 [240/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:44.662 [241/265] Linking target lib/librte_mempool.so.24.0 00:01:44.662 [242/265] Linking target lib/librte_rcu.so.24.0 00:01:44.662 [243/265] Linking target drivers/librte_bus_pci.so.24.0 00:01:44.920 [244/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:44.920 [245/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:44.920 [246/265] Linking target drivers/librte_mempool_ring.so.24.0 00:01:44.920 [247/265] Linking target lib/librte_mbuf.so.24.0 00:01:44.920 [248/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:45.178 [249/265] Linking target lib/librte_reorder.so.24.0 00:01:45.178 [250/265] Linking target lib/librte_compressdev.so.24.0 00:01:45.178 [251/265] Linking target lib/librte_net.so.24.0 00:01:45.178 [252/265] Linking target lib/librte_cryptodev.so.24.0 00:01:45.178 [253/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:45.178 [254/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:45.178 [255/265] Linking target lib/librte_hash.so.24.0 00:01:45.178 [256/265] Linking target lib/librte_cmdline.so.24.0 00:01:45.178 [257/265] Linking target lib/librte_security.so.24.0 00:01:45.178 [258/265] Linking target lib/librte_ethdev.so.24.0 00:01:45.438 [259/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:45.438 [260/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:45.438 [261/265] Linking target lib/librte_power.so.24.0 00:01:47.992 [262/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:47.992 [263/265] Linking static target lib/librte_vhost.a 00:01:48.559 [264/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:48.818 [265/265] Linking target lib/librte_vhost.so.24.0 00:01:48.818 INFO: autodetecting backend as ninja 00:01:48.818 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:49.752 CC lib/log/log.o 00:01:49.752 CC lib/log/log_flags.o 00:01:49.752 CC lib/log/log_deprecated.o 00:01:49.752 CC lib/ut/ut.o 00:01:49.752 CC lib/ut_mock/mock.o 00:01:49.752 LIB libspdk_ut_mock.a 00:01:49.752 SO libspdk_ut_mock.so.5.0 00:01:49.752 LIB libspdk_log.a 00:01:49.752 LIB libspdk_ut.a 00:01:49.752 SO libspdk_ut.so.1.0 00:01:49.752 SO libspdk_log.so.6.1 
00:01:49.752 SYMLINK libspdk_ut_mock.so 00:01:49.752 SYMLINK libspdk_ut.so 00:01:49.752 SYMLINK libspdk_log.so 00:01:50.010 CC lib/ioat/ioat.o 00:01:50.010 CC lib/util/base64.o 00:01:50.011 CC lib/util/bit_array.o 00:01:50.011 CXX lib/trace_parser/trace.o 00:01:50.011 CC lib/dma/dma.o 00:01:50.011 CC lib/util/cpuset.o 00:01:50.011 CC lib/util/crc16.o 00:01:50.011 CC lib/util/crc32.o 00:01:50.011 CC lib/util/crc32c.o 00:01:50.011 CC lib/util/crc32_ieee.o 00:01:50.011 CC lib/util/crc64.o 00:01:50.011 CC lib/util/dif.o 00:01:50.011 CC lib/util/fd.o 00:01:50.011 CC lib/util/file.o 00:01:50.011 CC lib/util/hexlify.o 00:01:50.011 CC lib/util/iov.o 00:01:50.011 CC lib/util/math.o 00:01:50.011 CC lib/util/pipe.o 00:01:50.011 CC lib/util/strerror_tls.o 00:01:50.011 CC lib/util/string.o 00:01:50.011 CC lib/util/uuid.o 00:01:50.011 CC lib/util/fd_group.o 00:01:50.011 CC lib/util/xor.o 00:01:50.011 CC lib/util/zipf.o 00:01:50.011 CC lib/vfio_user/host/vfio_user_pci.o 00:01:50.011 CC lib/vfio_user/host/vfio_user.o 00:01:50.011 LIB libspdk_dma.a 00:01:50.268 SO libspdk_dma.so.3.0 00:01:50.268 SYMLINK libspdk_dma.so 00:01:50.268 LIB libspdk_ioat.a 00:01:50.268 SO libspdk_ioat.so.6.0 00:01:50.268 SYMLINK libspdk_ioat.so 00:01:50.268 LIB libspdk_vfio_user.a 00:01:50.268 SO libspdk_vfio_user.so.4.0 00:01:50.526 SYMLINK libspdk_vfio_user.so 00:01:50.526 LIB libspdk_util.a 00:01:50.526 SO libspdk_util.so.8.0 00:01:50.784 SYMLINK libspdk_util.so 00:01:50.784 CC lib/vmd/vmd.o 00:01:50.784 CC lib/rdma/common.o 00:01:50.784 CC lib/env_dpdk/env.o 00:01:50.784 CC lib/json/json_parse.o 00:01:50.784 CC lib/conf/conf.o 00:01:50.784 CC lib/env_dpdk/memory.o 00:01:50.784 CC lib/vmd/led.o 00:01:50.784 CC lib/idxd/idxd.o 00:01:50.784 CC lib/rdma/rdma_verbs.o 00:01:50.784 CC lib/env_dpdk/pci.o 00:01:50.784 CC lib/json/json_util.o 00:01:50.784 CC lib/idxd/idxd_user.o 00:01:50.784 CC lib/env_dpdk/init.o 00:01:50.784 CC lib/json/json_write.o 00:01:50.784 CC lib/env_dpdk/threads.o 00:01:50.784 CC lib/idxd/idxd_kernel.o 00:01:50.784 CC lib/env_dpdk/pci_ioat.o 00:01:50.784 CC lib/env_dpdk/pci_virtio.o 00:01:50.784 CC lib/env_dpdk/pci_vmd.o 00:01:50.784 CC lib/env_dpdk/pci_idxd.o 00:01:50.784 CC lib/env_dpdk/pci_event.o 00:01:50.784 CC lib/env_dpdk/sigbus_handler.o 00:01:50.784 CC lib/env_dpdk/pci_dpdk.o 00:01:50.784 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:50.784 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:51.041 LIB libspdk_trace_parser.a 00:01:51.041 SO libspdk_trace_parser.so.4.0 00:01:51.041 LIB libspdk_conf.a 00:01:51.041 SO libspdk_conf.so.5.0 00:01:51.041 SYMLINK libspdk_trace_parser.so 00:01:51.041 LIB libspdk_rdma.a 00:01:51.042 SYMLINK libspdk_conf.so 00:01:51.042 SO libspdk_rdma.so.5.0 00:01:51.042 LIB libspdk_json.a 00:01:51.299 SO libspdk_json.so.5.1 00:01:51.299 SYMLINK libspdk_rdma.so 00:01:51.299 SYMLINK libspdk_json.so 00:01:51.299 CC lib/jsonrpc/jsonrpc_server.o 00:01:51.299 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:51.299 CC lib/jsonrpc/jsonrpc_client.o 00:01:51.299 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:51.299 LIB libspdk_idxd.a 00:01:51.299 SO libspdk_idxd.so.11.0 00:01:51.557 SYMLINK libspdk_idxd.so 00:01:51.557 LIB libspdk_vmd.a 00:01:51.557 SO libspdk_vmd.so.5.0 00:01:51.557 LIB libspdk_jsonrpc.a 00:01:51.557 SYMLINK libspdk_vmd.so 00:01:51.557 SO libspdk_jsonrpc.so.5.1 00:01:51.814 SYMLINK libspdk_jsonrpc.so 00:01:51.814 CC lib/rpc/rpc.o 00:01:52.072 LIB libspdk_rpc.a 00:01:52.072 SO libspdk_rpc.so.5.0 00:01:52.072 SYMLINK libspdk_rpc.so 00:01:52.330 CC lib/trace/trace.o 00:01:52.330 CC lib/trace/trace_flags.o 
00:01:52.330 CC lib/trace/trace_rpc.o 00:01:52.330 CC lib/notify/notify.o 00:01:52.330 CC lib/notify/notify_rpc.o 00:01:52.330 CC lib/sock/sock.o 00:01:52.330 CC lib/sock/sock_rpc.o 00:01:52.330 LIB libspdk_notify.a 00:01:52.330 SO libspdk_notify.so.5.0 00:01:52.330 LIB libspdk_trace.a 00:01:52.330 SYMLINK libspdk_notify.so 00:01:52.587 SO libspdk_trace.so.9.0 00:01:52.587 SYMLINK libspdk_trace.so 00:01:52.587 LIB libspdk_sock.a 00:01:52.587 CC lib/thread/thread.o 00:01:52.587 CC lib/thread/iobuf.o 00:01:52.587 SO libspdk_sock.so.8.0 00:01:52.587 SYMLINK libspdk_sock.so 00:01:52.845 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:52.845 CC lib/nvme/nvme_ctrlr.o 00:01:52.845 CC lib/nvme/nvme_fabric.o 00:01:52.845 CC lib/nvme/nvme_ns_cmd.o 00:01:52.845 CC lib/nvme/nvme_ns.o 00:01:52.845 CC lib/nvme/nvme_pcie_common.o 00:01:52.845 CC lib/nvme/nvme_pcie.o 00:01:52.845 CC lib/nvme/nvme_qpair.o 00:01:52.845 CC lib/nvme/nvme.o 00:01:52.845 CC lib/nvme/nvme_quirks.o 00:01:52.845 CC lib/nvme/nvme_transport.o 00:01:52.845 CC lib/nvme/nvme_discovery.o 00:01:52.845 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:52.845 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:52.845 CC lib/nvme/nvme_tcp.o 00:01:52.845 CC lib/nvme/nvme_opal.o 00:01:52.845 CC lib/nvme/nvme_io_msg.o 00:01:52.845 CC lib/nvme/nvme_poll_group.o 00:01:52.845 CC lib/nvme/nvme_zns.o 00:01:52.845 CC lib/nvme/nvme_cuse.o 00:01:52.845 CC lib/nvme/nvme_rdma.o 00:01:52.845 CC lib/nvme/nvme_vfio_user.o 00:01:52.845 LIB libspdk_env_dpdk.a 00:01:52.845 SO libspdk_env_dpdk.so.13.0 00:01:53.102 SYMLINK libspdk_env_dpdk.so 00:01:54.476 LIB libspdk_thread.a 00:01:54.476 SO libspdk_thread.so.9.0 00:01:54.476 SYMLINK libspdk_thread.so 00:01:54.476 CC lib/blob/blobstore.o 00:01:54.476 CC lib/virtio/virtio.o 00:01:54.476 CC lib/blob/request.o 00:01:54.476 CC lib/blob/zeroes.o 00:01:54.476 CC lib/virtio/virtio_vhost_user.o 00:01:54.476 CC lib/accel/accel.o 00:01:54.476 CC lib/accel/accel_rpc.o 00:01:54.476 CC lib/virtio/virtio_vfio_user.o 00:01:54.476 CC lib/blob/blob_bs_dev.o 00:01:54.476 CC lib/virtio/virtio_pci.o 00:01:54.476 CC lib/accel/accel_sw.o 00:01:54.476 CC lib/init/json_config.o 00:01:54.476 CC lib/init/subsystem.o 00:01:54.476 CC lib/init/subsystem_rpc.o 00:01:54.476 CC lib/init/rpc.o 00:01:54.733 LIB libspdk_init.a 00:01:54.733 SO libspdk_init.so.4.0 00:01:54.733 LIB libspdk_virtio.a 00:01:54.733 SYMLINK libspdk_init.so 00:01:54.733 SO libspdk_virtio.so.6.0 00:01:54.733 SYMLINK libspdk_virtio.so 00:01:54.733 CC lib/event/app.o 00:01:54.733 CC lib/event/reactor.o 00:01:54.733 CC lib/event/log_rpc.o 00:01:54.733 CC lib/event/app_rpc.o 00:01:54.733 CC lib/event/scheduler_static.o 00:01:55.299 LIB libspdk_nvme.a 00:01:55.299 LIB libspdk_event.a 00:01:55.299 SO libspdk_nvme.so.12.0 00:01:55.299 SO libspdk_event.so.12.0 00:01:55.299 SYMLINK libspdk_event.so 00:01:55.299 LIB libspdk_accel.a 00:01:55.299 SO libspdk_accel.so.14.0 00:01:55.558 SYMLINK libspdk_nvme.so 00:01:55.558 SYMLINK libspdk_accel.so 00:01:55.558 CC lib/bdev/bdev.o 00:01:55.558 CC lib/bdev/bdev_rpc.o 00:01:55.558 CC lib/bdev/bdev_zone.o 00:01:55.558 CC lib/bdev/part.o 00:01:55.558 CC lib/bdev/scsi_nvme.o 00:01:57.459 LIB libspdk_blob.a 00:01:57.459 SO libspdk_blob.so.10.1 00:01:57.459 SYMLINK libspdk_blob.so 00:01:57.459 CC lib/blobfs/blobfs.o 00:01:57.459 CC lib/blobfs/tree.o 00:01:57.459 CC lib/lvol/lvol.o 00:01:58.025 LIB libspdk_bdev.a 00:01:58.025 LIB libspdk_blobfs.a 00:01:58.025 SO libspdk_bdev.so.14.0 00:01:58.025 SO libspdk_blobfs.so.9.0 00:01:58.025 SYMLINK libspdk_blobfs.so 00:01:58.294 LIB 
libspdk_lvol.a 00:01:58.294 SYMLINK libspdk_bdev.so 00:01:58.294 SO libspdk_lvol.so.9.1 00:01:58.294 SYMLINK libspdk_lvol.so 00:01:58.294 CC lib/nvmf/ctrlr.o 00:01:58.294 CC lib/ublk/ublk.o 00:01:58.294 CC lib/nbd/nbd.o 00:01:58.294 CC lib/nvmf/ctrlr_discovery.o 00:01:58.294 CC lib/ublk/ublk_rpc.o 00:01:58.294 CC lib/nbd/nbd_rpc.o 00:01:58.294 CC lib/nvmf/ctrlr_bdev.o 00:01:58.294 CC lib/nvmf/subsystem.o 00:01:58.294 CC lib/ftl/ftl_core.o 00:01:58.294 CC lib/nvmf/nvmf.o 00:01:58.294 CC lib/ftl/ftl_init.o 00:01:58.294 CC lib/nvmf/nvmf_rpc.o 00:01:58.294 CC lib/ftl/ftl_layout.o 00:01:58.294 CC lib/nvmf/transport.o 00:01:58.294 CC lib/scsi/dev.o 00:01:58.294 CC lib/ftl/ftl_debug.o 00:01:58.294 CC lib/nvmf/tcp.o 00:01:58.294 CC lib/ftl/ftl_io.o 00:01:58.294 CC lib/scsi/lun.o 00:01:58.294 CC lib/nvmf/rdma.o 00:01:58.294 CC lib/ftl/ftl_sb.o 00:01:58.294 CC lib/scsi/port.o 00:01:58.294 CC lib/scsi/scsi.o 00:01:58.294 CC lib/ftl/ftl_l2p.o 00:01:58.294 CC lib/ftl/ftl_l2p_flat.o 00:01:58.294 CC lib/scsi/scsi_bdev.o 00:01:58.294 CC lib/scsi/scsi_pr.o 00:01:58.294 CC lib/scsi/scsi_rpc.o 00:01:58.294 CC lib/ftl/ftl_nv_cache.o 00:01:58.294 CC lib/scsi/task.o 00:01:58.294 CC lib/ftl/ftl_band.o 00:01:58.294 CC lib/ftl/ftl_band_ops.o 00:01:58.294 CC lib/ftl/ftl_writer.o 00:01:58.294 CC lib/ftl/ftl_rq.o 00:01:58.294 CC lib/ftl/ftl_reloc.o 00:01:58.294 CC lib/ftl/ftl_l2p_cache.o 00:01:58.294 CC lib/ftl/ftl_p2l.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:58.294 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:58.553 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:58.553 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:58.553 CC lib/ftl/utils/ftl_conf.o 00:01:58.553 CC lib/ftl/utils/ftl_md.o 00:01:58.553 CC lib/ftl/utils/ftl_mempool.o 00:01:58.553 CC lib/ftl/utils/ftl_bitmap.o 00:01:58.553 CC lib/ftl/utils/ftl_property.o 00:01:58.553 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:58.811 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:58.811 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:58.811 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:58.811 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:58.811 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:58.811 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:58.811 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:58.811 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:58.811 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:58.811 CC lib/ftl/base/ftl_base_dev.o 00:01:58.811 CC lib/ftl/base/ftl_base_bdev.o 00:01:58.811 CC lib/ftl/ftl_trace.o 00:01:59.070 LIB libspdk_nbd.a 00:01:59.070 SO libspdk_nbd.so.6.0 00:01:59.070 LIB libspdk_scsi.a 00:01:59.070 SO libspdk_scsi.so.8.0 00:01:59.070 SYMLINK libspdk_nbd.so 00:01:59.070 SYMLINK libspdk_scsi.so 00:01:59.328 LIB libspdk_ublk.a 00:01:59.328 SO libspdk_ublk.so.2.0 00:01:59.328 CC lib/vhost/vhost.o 00:01:59.328 CC lib/iscsi/conn.o 00:01:59.328 CC lib/vhost/vhost_rpc.o 00:01:59.328 CC lib/iscsi/init_grp.o 00:01:59.328 CC lib/vhost/vhost_scsi.o 00:01:59.328 CC lib/iscsi/iscsi.o 00:01:59.328 CC lib/vhost/vhost_blk.o 00:01:59.328 CC lib/vhost/rte_vhost_user.o 00:01:59.328 CC lib/iscsi/md5.o 00:01:59.328 CC lib/iscsi/param.o 00:01:59.328 CC lib/iscsi/portal_grp.o 00:01:59.328 CC lib/iscsi/tgt_node.o 
00:01:59.328 CC lib/iscsi/iscsi_subsystem.o 00:01:59.329 CC lib/iscsi/iscsi_rpc.o 00:01:59.329 CC lib/iscsi/task.o 00:01:59.329 SYMLINK libspdk_ublk.so 00:01:59.587 LIB libspdk_ftl.a 00:01:59.587 SO libspdk_ftl.so.8.0 00:02:00.255 SYMLINK libspdk_ftl.so 00:02:00.539 LIB libspdk_vhost.a 00:02:00.539 SO libspdk_vhost.so.7.1 00:02:00.539 SYMLINK libspdk_vhost.so 00:02:00.797 LIB libspdk_iscsi.a 00:02:00.797 LIB libspdk_nvmf.a 00:02:00.797 SO libspdk_iscsi.so.7.0 00:02:00.797 SO libspdk_nvmf.so.17.0 00:02:00.797 SYMLINK libspdk_iscsi.so 00:02:01.055 SYMLINK libspdk_nvmf.so 00:02:01.055 CC module/env_dpdk/env_dpdk_rpc.o 00:02:01.313 CC module/sock/posix/posix.o 00:02:01.313 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:01.313 CC module/accel/iaa/accel_iaa.o 00:02:01.313 CC module/blob/bdev/blob_bdev.o 00:02:01.313 CC module/accel/error/accel_error.o 00:02:01.313 CC module/accel/iaa/accel_iaa_rpc.o 00:02:01.313 CC module/scheduler/gscheduler/gscheduler.o 00:02:01.313 CC module/accel/error/accel_error_rpc.o 00:02:01.313 CC module/accel/ioat/accel_ioat.o 00:02:01.313 CC module/accel/dsa/accel_dsa.o 00:02:01.313 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:01.313 CC module/accel/ioat/accel_ioat_rpc.o 00:02:01.313 CC module/accel/dsa/accel_dsa_rpc.o 00:02:01.313 LIB libspdk_env_dpdk_rpc.a 00:02:01.313 SO libspdk_env_dpdk_rpc.so.5.0 00:02:01.313 SYMLINK libspdk_env_dpdk_rpc.so 00:02:01.313 LIB libspdk_scheduler_gscheduler.a 00:02:01.313 LIB libspdk_scheduler_dpdk_governor.a 00:02:01.313 SO libspdk_scheduler_gscheduler.so.3.0 00:02:01.313 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:01.313 LIB libspdk_accel_error.a 00:02:01.313 LIB libspdk_accel_ioat.a 00:02:01.313 LIB libspdk_scheduler_dynamic.a 00:02:01.313 SO libspdk_accel_error.so.1.0 00:02:01.313 LIB libspdk_accel_iaa.a 00:02:01.313 SO libspdk_accel_ioat.so.5.0 00:02:01.313 SO libspdk_scheduler_dynamic.so.3.0 00:02:01.313 SYMLINK libspdk_scheduler_gscheduler.so 00:02:01.313 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:01.571 SO libspdk_accel_iaa.so.2.0 00:02:01.571 LIB libspdk_accel_dsa.a 00:02:01.571 SYMLINK libspdk_accel_error.so 00:02:01.571 LIB libspdk_blob_bdev.a 00:02:01.571 SYMLINK libspdk_scheduler_dynamic.so 00:02:01.571 SYMLINK libspdk_accel_ioat.so 00:02:01.571 SO libspdk_accel_dsa.so.4.0 00:02:01.571 SO libspdk_blob_bdev.so.10.1 00:02:01.571 SYMLINK libspdk_accel_iaa.so 00:02:01.571 SYMLINK libspdk_accel_dsa.so 00:02:01.571 SYMLINK libspdk_blob_bdev.so 00:02:01.829 CC module/bdev/lvol/vbdev_lvol.o 00:02:01.829 CC module/bdev/null/bdev_null.o 00:02:01.829 CC module/bdev/split/vbdev_split.o 00:02:01.829 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:01.829 CC module/bdev/raid/bdev_raid.o 00:02:01.829 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:01.829 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:01.829 CC module/bdev/split/vbdev_split_rpc.o 00:02:01.829 CC module/bdev/malloc/bdev_malloc.o 00:02:01.829 CC module/bdev/raid/bdev_raid_rpc.o 00:02:01.829 CC module/bdev/nvme/bdev_nvme.o 00:02:01.829 CC module/bdev/null/bdev_null_rpc.o 00:02:01.829 CC module/bdev/gpt/gpt.o 00:02:01.829 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:01.829 CC module/bdev/delay/vbdev_delay.o 00:02:01.829 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:01.829 CC module/bdev/error/vbdev_error.o 00:02:01.829 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:01.829 CC module/bdev/raid/bdev_raid_sb.o 00:02:01.829 CC module/bdev/iscsi/bdev_iscsi.o 00:02:01.829 CC module/bdev/passthru/vbdev_passthru.o 00:02:01.829 CC 
module/bdev/aio/bdev_aio.o 00:02:01.829 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:01.829 CC module/bdev/error/vbdev_error_rpc.o 00:02:01.829 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:01.829 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:01.829 CC module/bdev/raid/raid0.o 00:02:01.829 CC module/bdev/raid/raid1.o 00:02:01.829 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:01.829 CC module/bdev/gpt/vbdev_gpt.o 00:02:01.829 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:01.829 CC module/bdev/aio/bdev_aio_rpc.o 00:02:01.829 CC module/bdev/nvme/nvme_rpc.o 00:02:01.829 CC module/bdev/nvme/bdev_mdns_client.o 00:02:01.829 CC module/bdev/raid/concat.o 00:02:01.829 CC module/bdev/nvme/vbdev_opal.o 00:02:01.829 CC module/blobfs/bdev/blobfs_bdev.o 00:02:01.829 CC module/bdev/ftl/bdev_ftl.o 00:02:01.829 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:01.829 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:01.829 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:01.829 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:02.087 LIB libspdk_blobfs_bdev.a 00:02:02.087 SO libspdk_blobfs_bdev.so.5.0 00:02:02.087 LIB libspdk_sock_posix.a 00:02:02.087 SO libspdk_sock_posix.so.5.0 00:02:02.087 LIB libspdk_bdev_split.a 00:02:02.087 SYMLINK libspdk_blobfs_bdev.so 00:02:02.087 SO libspdk_bdev_split.so.5.0 00:02:02.087 LIB libspdk_bdev_null.a 00:02:02.087 LIB libspdk_bdev_gpt.a 00:02:02.087 LIB libspdk_bdev_passthru.a 00:02:02.087 LIB libspdk_bdev_ftl.a 00:02:02.087 SYMLINK libspdk_sock_posix.so 00:02:02.087 SO libspdk_bdev_null.so.5.0 00:02:02.087 SO libspdk_bdev_gpt.so.5.0 00:02:02.347 LIB libspdk_bdev_error.a 00:02:02.347 SO libspdk_bdev_passthru.so.5.0 00:02:02.347 SO libspdk_bdev_ftl.so.5.0 00:02:02.347 SYMLINK libspdk_bdev_split.so 00:02:02.347 LIB libspdk_bdev_iscsi.a 00:02:02.347 LIB libspdk_bdev_zone_block.a 00:02:02.347 SO libspdk_bdev_error.so.5.0 00:02:02.347 LIB libspdk_bdev_delay.a 00:02:02.347 SYMLINK libspdk_bdev_null.so 00:02:02.347 SO libspdk_bdev_zone_block.so.5.0 00:02:02.347 SO libspdk_bdev_iscsi.so.5.0 00:02:02.347 SYMLINK libspdk_bdev_gpt.so 00:02:02.347 SYMLINK libspdk_bdev_passthru.so 00:02:02.347 LIB libspdk_bdev_aio.a 00:02:02.347 SYMLINK libspdk_bdev_ftl.so 00:02:02.347 SO libspdk_bdev_delay.so.5.0 00:02:02.347 SYMLINK libspdk_bdev_error.so 00:02:02.347 SO libspdk_bdev_aio.so.5.0 00:02:02.347 LIB libspdk_bdev_malloc.a 00:02:02.347 SYMLINK libspdk_bdev_zone_block.so 00:02:02.347 SYMLINK libspdk_bdev_iscsi.so 00:02:02.347 SO libspdk_bdev_malloc.so.5.0 00:02:02.347 SYMLINK libspdk_bdev_delay.so 00:02:02.347 SYMLINK libspdk_bdev_aio.so 00:02:02.347 LIB libspdk_bdev_lvol.a 00:02:02.347 SYMLINK libspdk_bdev_malloc.so 00:02:02.347 SO libspdk_bdev_lvol.so.5.0 00:02:02.347 LIB libspdk_bdev_virtio.a 00:02:02.347 SO libspdk_bdev_virtio.so.5.0 00:02:02.347 SYMLINK libspdk_bdev_lvol.so 00:02:02.605 SYMLINK libspdk_bdev_virtio.so 00:02:02.863 LIB libspdk_bdev_raid.a 00:02:02.863 SO libspdk_bdev_raid.so.5.0 00:02:02.863 SYMLINK libspdk_bdev_raid.so 00:02:04.234 LIB libspdk_bdev_nvme.a 00:02:04.234 SO libspdk_bdev_nvme.so.6.0 00:02:04.234 SYMLINK libspdk_bdev_nvme.so 00:02:04.493 CC module/event/subsystems/iobuf/iobuf.o 00:02:04.493 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:04.493 CC module/event/subsystems/vmd/vmd.o 00:02:04.493 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:04.493 CC module/event/subsystems/sock/sock.o 00:02:04.493 CC module/event/subsystems/scheduler/scheduler.o 00:02:04.493 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:04.493 LIB libspdk_event_sock.a 00:02:04.493 LIB 
libspdk_event_vhost_blk.a 00:02:04.493 LIB libspdk_event_scheduler.a 00:02:04.493 LIB libspdk_event_vmd.a 00:02:04.493 LIB libspdk_event_iobuf.a 00:02:04.493 SO libspdk_event_vhost_blk.so.2.0 00:02:04.493 SO libspdk_event_sock.so.4.0 00:02:04.493 SO libspdk_event_scheduler.so.3.0 00:02:04.493 SO libspdk_event_vmd.so.5.0 00:02:04.493 SO libspdk_event_iobuf.so.2.0 00:02:04.493 SYMLINK libspdk_event_sock.so 00:02:04.493 SYMLINK libspdk_event_vhost_blk.so 00:02:04.493 SYMLINK libspdk_event_scheduler.so 00:02:04.493 SYMLINK libspdk_event_vmd.so 00:02:04.493 SYMLINK libspdk_event_iobuf.so 00:02:04.751 CC module/event/subsystems/accel/accel.o 00:02:04.751 LIB libspdk_event_accel.a 00:02:05.008 SO libspdk_event_accel.so.5.0 00:02:05.008 SYMLINK libspdk_event_accel.so 00:02:05.008 CC module/event/subsystems/bdev/bdev.o 00:02:05.266 LIB libspdk_event_bdev.a 00:02:05.266 SO libspdk_event_bdev.so.5.0 00:02:05.266 SYMLINK libspdk_event_bdev.so 00:02:05.524 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:05.524 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:05.524 CC module/event/subsystems/scsi/scsi.o 00:02:05.524 CC module/event/subsystems/nbd/nbd.o 00:02:05.524 CC module/event/subsystems/ublk/ublk.o 00:02:05.524 LIB libspdk_event_ublk.a 00:02:05.524 LIB libspdk_event_nbd.a 00:02:05.524 LIB libspdk_event_scsi.a 00:02:05.524 SO libspdk_event_ublk.so.2.0 00:02:05.524 SO libspdk_event_nbd.so.5.0 00:02:05.524 SO libspdk_event_scsi.so.5.0 00:02:05.524 SYMLINK libspdk_event_ublk.so 00:02:05.524 SYMLINK libspdk_event_nbd.so 00:02:05.524 LIB libspdk_event_nvmf.a 00:02:05.782 SYMLINK libspdk_event_scsi.so 00:02:05.782 SO libspdk_event_nvmf.so.5.0 00:02:05.782 SYMLINK libspdk_event_nvmf.so 00:02:05.782 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:05.782 CC module/event/subsystems/iscsi/iscsi.o 00:02:05.782 LIB libspdk_event_vhost_scsi.a 00:02:06.040 SO libspdk_event_vhost_scsi.so.2.0 00:02:06.040 LIB libspdk_event_iscsi.a 00:02:06.040 SO libspdk_event_iscsi.so.5.0 00:02:06.040 SYMLINK libspdk_event_vhost_scsi.so 00:02:06.040 SYMLINK libspdk_event_iscsi.so 00:02:06.040 SO libspdk.so.5.0 00:02:06.040 SYMLINK libspdk.so 00:02:06.302 CC app/trace_record/trace_record.o 00:02:06.302 CC app/spdk_nvme_identify/identify.o 00:02:06.302 CXX app/trace/trace.o 00:02:06.302 CC app/spdk_nvme_perf/perf.o 00:02:06.302 CC app/spdk_lspci/spdk_lspci.o 00:02:06.302 CC app/spdk_top/spdk_top.o 00:02:06.302 CC app/spdk_nvme_discover/discovery_aer.o 00:02:06.302 CC test/rpc_client/rpc_client_test.o 00:02:06.302 TEST_HEADER include/spdk/accel.h 00:02:06.302 TEST_HEADER include/spdk/accel_module.h 00:02:06.302 TEST_HEADER include/spdk/assert.h 00:02:06.302 TEST_HEADER include/spdk/barrier.h 00:02:06.302 TEST_HEADER include/spdk/base64.h 00:02:06.302 TEST_HEADER include/spdk/bdev.h 00:02:06.302 TEST_HEADER include/spdk/bdev_module.h 00:02:06.302 TEST_HEADER include/spdk/bdev_zone.h 00:02:06.302 TEST_HEADER include/spdk/bit_array.h 00:02:06.302 TEST_HEADER include/spdk/bit_pool.h 00:02:06.302 TEST_HEADER include/spdk/blob_bdev.h 00:02:06.302 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:06.302 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:06.302 TEST_HEADER include/spdk/blobfs.h 00:02:06.302 TEST_HEADER include/spdk/blob.h 00:02:06.302 CC app/spdk_dd/spdk_dd.o 00:02:06.302 TEST_HEADER include/spdk/conf.h 00:02:06.303 TEST_HEADER include/spdk/config.h 00:02:06.303 CC app/nvmf_tgt/nvmf_main.o 00:02:06.303 TEST_HEADER include/spdk/cpuset.h 00:02:06.303 TEST_HEADER include/spdk/crc16.h 00:02:06.303 TEST_HEADER 
include/spdk/crc32.h 00:02:06.303 CC app/iscsi_tgt/iscsi_tgt.o 00:02:06.303 TEST_HEADER include/spdk/crc64.h 00:02:06.303 CC examples/vmd/lsvmd/lsvmd.o 00:02:06.303 CC examples/ioat/perf/perf.o 00:02:06.303 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:06.303 CC app/vhost/vhost.o 00:02:06.303 CC examples/nvme/reconnect/reconnect.o 00:02:06.303 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:06.303 TEST_HEADER include/spdk/dif.h 00:02:06.303 CC examples/ioat/verify/verify.o 00:02:06.303 CC examples/nvme/hotplug/hotplug.o 00:02:06.303 CC examples/util/zipf/zipf.o 00:02:06.303 CC examples/vmd/led/led.o 00:02:06.303 CC app/fio/nvme/fio_plugin.o 00:02:06.303 TEST_HEADER include/spdk/dma.h 00:02:06.303 CC examples/nvme/arbitration/arbitration.o 00:02:06.303 CC examples/nvme/abort/abort.o 00:02:06.303 TEST_HEADER include/spdk/endian.h 00:02:06.303 CC examples/accel/perf/accel_perf.o 00:02:06.303 CC examples/idxd/perf/perf.o 00:02:06.303 CC examples/nvme/hello_world/hello_world.o 00:02:06.303 TEST_HEADER include/spdk/env_dpdk.h 00:02:06.303 CC test/thread/poller_perf/poller_perf.o 00:02:06.303 CC examples/sock/hello_world/hello_sock.o 00:02:06.303 CC app/spdk_tgt/spdk_tgt.o 00:02:06.303 TEST_HEADER include/spdk/env.h 00:02:06.303 CC test/event/event_perf/event_perf.o 00:02:06.303 TEST_HEADER include/spdk/event.h 00:02:06.303 CC test/nvme/aer/aer.o 00:02:06.303 TEST_HEADER include/spdk/fd_group.h 00:02:06.303 TEST_HEADER include/spdk/fd.h 00:02:06.303 TEST_HEADER include/spdk/file.h 00:02:06.303 TEST_HEADER include/spdk/ftl.h 00:02:06.303 TEST_HEADER include/spdk/gpt_spec.h 00:02:06.303 TEST_HEADER include/spdk/hexlify.h 00:02:06.303 TEST_HEADER include/spdk/histogram_data.h 00:02:06.303 CC examples/bdev/hello_world/hello_bdev.o 00:02:06.303 TEST_HEADER include/spdk/idxd.h 00:02:06.303 CC examples/thread/thread/thread_ex.o 00:02:06.303 TEST_HEADER include/spdk/idxd_spec.h 00:02:06.303 CC examples/blob/cli/blobcli.o 00:02:06.303 CC test/dma/test_dma/test_dma.o 00:02:06.303 CC test/accel/dif/dif.o 00:02:06.303 CC app/fio/bdev/fio_plugin.o 00:02:06.303 CC test/bdev/bdevio/bdevio.o 00:02:06.303 CC examples/blob/hello_world/hello_blob.o 00:02:06.303 CC examples/bdev/bdevperf/bdevperf.o 00:02:06.303 TEST_HEADER include/spdk/init.h 00:02:06.303 CC examples/nvmf/nvmf/nvmf.o 00:02:06.564 CC test/app/bdev_svc/bdev_svc.o 00:02:06.564 TEST_HEADER include/spdk/ioat.h 00:02:06.564 TEST_HEADER include/spdk/ioat_spec.h 00:02:06.564 TEST_HEADER include/spdk/iscsi_spec.h 00:02:06.564 TEST_HEADER include/spdk/json.h 00:02:06.564 TEST_HEADER include/spdk/jsonrpc.h 00:02:06.564 TEST_HEADER include/spdk/likely.h 00:02:06.564 TEST_HEADER include/spdk/log.h 00:02:06.564 TEST_HEADER include/spdk/lvol.h 00:02:06.564 TEST_HEADER include/spdk/memory.h 00:02:06.564 TEST_HEADER include/spdk/mmio.h 00:02:06.564 CC test/env/mem_callbacks/mem_callbacks.o 00:02:06.564 CC test/blobfs/mkfs/mkfs.o 00:02:06.564 TEST_HEADER include/spdk/nbd.h 00:02:06.564 CC test/lvol/esnap/esnap.o 00:02:06.564 TEST_HEADER include/spdk/notify.h 00:02:06.564 TEST_HEADER include/spdk/nvme.h 00:02:06.564 TEST_HEADER include/spdk/nvme_intel.h 00:02:06.564 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:06.564 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:06.564 TEST_HEADER include/spdk/nvme_spec.h 00:02:06.564 TEST_HEADER include/spdk/nvme_zns.h 00:02:06.564 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:06.564 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:06.564 TEST_HEADER include/spdk/nvmf.h 00:02:06.564 TEST_HEADER include/spdk/nvmf_spec.h 00:02:06.564 
TEST_HEADER include/spdk/nvmf_transport.h 00:02:06.564 TEST_HEADER include/spdk/opal.h 00:02:06.564 TEST_HEADER include/spdk/opal_spec.h 00:02:06.564 TEST_HEADER include/spdk/pci_ids.h 00:02:06.564 TEST_HEADER include/spdk/pipe.h 00:02:06.564 TEST_HEADER include/spdk/queue.h 00:02:06.564 LINK spdk_lspci 00:02:06.564 TEST_HEADER include/spdk/reduce.h 00:02:06.564 TEST_HEADER include/spdk/rpc.h 00:02:06.564 TEST_HEADER include/spdk/scheduler.h 00:02:06.564 TEST_HEADER include/spdk/scsi.h 00:02:06.564 TEST_HEADER include/spdk/scsi_spec.h 00:02:06.564 TEST_HEADER include/spdk/sock.h 00:02:06.564 TEST_HEADER include/spdk/stdinc.h 00:02:06.564 TEST_HEADER include/spdk/string.h 00:02:06.564 TEST_HEADER include/spdk/thread.h 00:02:06.564 TEST_HEADER include/spdk/trace.h 00:02:06.564 TEST_HEADER include/spdk/trace_parser.h 00:02:06.564 TEST_HEADER include/spdk/tree.h 00:02:06.564 TEST_HEADER include/spdk/ublk.h 00:02:06.564 TEST_HEADER include/spdk/util.h 00:02:06.564 TEST_HEADER include/spdk/uuid.h 00:02:06.564 TEST_HEADER include/spdk/version.h 00:02:06.564 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:06.564 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:06.564 TEST_HEADER include/spdk/vhost.h 00:02:06.564 TEST_HEADER include/spdk/vmd.h 00:02:06.564 TEST_HEADER include/spdk/xor.h 00:02:06.564 TEST_HEADER include/spdk/zipf.h 00:02:06.564 CXX test/cpp_headers/accel.o 00:02:06.564 LINK lsvmd 00:02:06.564 LINK rpc_client_test 00:02:06.564 LINK led 00:02:06.564 LINK zipf 00:02:06.565 LINK interrupt_tgt 00:02:06.565 LINK spdk_nvme_discover 00:02:06.565 LINK poller_perf 00:02:06.565 LINK event_perf 00:02:06.829 LINK nvmf_tgt 00:02:06.829 LINK vhost 00:02:06.829 LINK cmb_copy 00:02:06.829 LINK spdk_trace_record 00:02:06.829 LINK iscsi_tgt 00:02:06.829 LINK ioat_perf 00:02:06.829 LINK spdk_tgt 00:02:06.829 LINK bdev_svc 00:02:06.829 LINK verify 00:02:06.829 LINK hello_world 00:02:06.829 LINK hotplug 00:02:06.829 LINK hello_sock 00:02:06.829 LINK mkfs 00:02:06.829 LINK hello_bdev 00:02:06.829 LINK thread 00:02:06.829 LINK hello_blob 00:02:06.829 LINK aer 00:02:07.090 CXX test/cpp_headers/accel_module.o 00:02:07.090 LINK arbitration 00:02:07.090 LINK reconnect 00:02:07.090 LINK spdk_dd 00:02:07.090 LINK nvmf 00:02:07.090 LINK idxd_perf 00:02:07.090 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:07.090 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:07.090 LINK spdk_trace 00:02:07.090 CC test/nvme/reset/reset.o 00:02:07.090 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:07.090 CXX test/cpp_headers/assert.o 00:02:07.090 LINK abort 00:02:07.090 CC test/event/reactor/reactor.o 00:02:07.090 CC test/env/vtophys/vtophys.o 00:02:07.090 CC test/nvme/sgl/sgl.o 00:02:07.090 CC test/nvme/e2edp/nvme_dp.o 00:02:07.090 LINK dif 00:02:07.090 CC test/event/reactor_perf/reactor_perf.o 00:02:07.090 LINK test_dma 00:02:07.090 CC test/app/histogram_perf/histogram_perf.o 00:02:07.090 LINK bdevio 00:02:07.090 CXX test/cpp_headers/barrier.o 00:02:07.090 CC test/nvme/overhead/overhead.o 00:02:07.090 CC test/nvme/err_injection/err_injection.o 00:02:07.090 CC test/app/jsoncat/jsoncat.o 00:02:07.090 CXX test/cpp_headers/base64.o 00:02:07.355 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:07.355 LINK accel_perf 00:02:07.355 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:07.355 CXX test/cpp_headers/bdev.o 00:02:07.355 CC test/event/app_repeat/app_repeat.o 00:02:07.355 CC test/nvme/startup/startup.o 00:02:07.355 LINK nvme_manage 00:02:07.355 CXX test/cpp_headers/bdev_module.o 00:02:07.355 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:07.355 CXX test/cpp_headers/bdev_zone.o 00:02:07.355 CC test/event/scheduler/scheduler.o 00:02:07.355 LINK spdk_bdev 00:02:07.355 LINK pmr_persistence 00:02:07.355 LINK spdk_nvme 00:02:07.355 LINK reactor 00:02:07.355 CXX test/cpp_headers/bit_array.o 00:02:07.355 CC test/nvme/simple_copy/simple_copy.o 00:02:07.355 CC test/nvme/reserve/reserve.o 00:02:07.355 LINK vtophys 00:02:07.355 CC test/app/stub/stub.o 00:02:07.355 CC test/env/pci/pci_ut.o 00:02:07.355 CXX test/cpp_headers/bit_pool.o 00:02:07.355 CC test/env/memory/memory_ut.o 00:02:07.355 LINK blobcli 00:02:07.355 LINK reactor_perf 00:02:07.355 LINK histogram_perf 00:02:07.618 CXX test/cpp_headers/blob_bdev.o 00:02:07.618 LINK jsoncat 00:02:07.618 CC test/nvme/connect_stress/connect_stress.o 00:02:07.618 CXX test/cpp_headers/blobfs_bdev.o 00:02:07.618 CC test/nvme/boot_partition/boot_partition.o 00:02:07.618 CXX test/cpp_headers/blobfs.o 00:02:07.618 CC test/nvme/compliance/nvme_compliance.o 00:02:07.618 CXX test/cpp_headers/blob.o 00:02:07.618 LINK reset 00:02:07.618 LINK env_dpdk_post_init 00:02:07.618 LINK app_repeat 00:02:07.618 LINK err_injection 00:02:07.618 CXX test/cpp_headers/conf.o 00:02:07.618 CXX test/cpp_headers/config.o 00:02:07.618 LINK startup 00:02:07.618 CC test/nvme/fused_ordering/fused_ordering.o 00:02:07.618 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:07.618 CXX test/cpp_headers/cpuset.o 00:02:07.618 LINK sgl 00:02:07.618 CXX test/cpp_headers/crc16.o 00:02:07.618 CC test/nvme/fdp/fdp.o 00:02:07.618 CC test/nvme/cuse/cuse.o 00:02:07.618 LINK mem_callbacks 00:02:07.618 LINK nvme_dp 00:02:07.618 CXX test/cpp_headers/crc32.o 00:02:07.618 CXX test/cpp_headers/crc64.o 00:02:07.618 CXX test/cpp_headers/dif.o 00:02:07.618 CXX test/cpp_headers/dma.o 00:02:07.618 CXX test/cpp_headers/endian.o 00:02:07.618 CXX test/cpp_headers/env_dpdk.o 00:02:07.618 CXX test/cpp_headers/env.o 00:02:07.618 CXX test/cpp_headers/event.o 00:02:07.618 LINK spdk_nvme_perf 00:02:07.877 CXX test/cpp_headers/fd_group.o 00:02:07.878 LINK scheduler 00:02:07.878 LINK stub 00:02:07.878 LINK overhead 00:02:07.878 LINK reserve 00:02:07.878 LINK nvme_fuzz 00:02:07.878 CXX test/cpp_headers/fd.o 00:02:07.878 CXX test/cpp_headers/file.o 00:02:07.878 LINK spdk_nvme_identify 00:02:07.878 LINK simple_copy 00:02:07.878 LINK bdevperf 00:02:07.878 LINK boot_partition 00:02:07.878 LINK connect_stress 00:02:07.878 CXX test/cpp_headers/ftl.o 00:02:07.878 CXX test/cpp_headers/gpt_spec.o 00:02:07.878 CXX test/cpp_headers/hexlify.o 00:02:07.878 CXX test/cpp_headers/histogram_data.o 00:02:07.878 CXX test/cpp_headers/idxd.o 00:02:07.878 CXX test/cpp_headers/idxd_spec.o 00:02:07.878 CXX test/cpp_headers/init.o 00:02:07.878 CXX test/cpp_headers/ioat.o 00:02:07.878 LINK spdk_top 00:02:07.878 CXX test/cpp_headers/ioat_spec.o 00:02:07.878 CXX test/cpp_headers/iscsi_spec.o 00:02:07.878 CXX test/cpp_headers/json.o 00:02:07.878 CXX test/cpp_headers/jsonrpc.o 00:02:07.878 CXX test/cpp_headers/likely.o 00:02:07.878 CXX test/cpp_headers/log.o 00:02:07.878 LINK doorbell_aers 00:02:08.139 CXX test/cpp_headers/lvol.o 00:02:08.139 LINK fused_ordering 00:02:08.139 CXX test/cpp_headers/memory.o 00:02:08.139 CXX test/cpp_headers/mmio.o 00:02:08.139 CXX test/cpp_headers/nbd.o 00:02:08.139 CXX test/cpp_headers/notify.o 00:02:08.139 CXX test/cpp_headers/nvme.o 00:02:08.139 CXX test/cpp_headers/nvme_intel.o 00:02:08.139 LINK vhost_fuzz 00:02:08.139 CXX test/cpp_headers/nvme_ocssd.o 00:02:08.139 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:08.139 
CXX test/cpp_headers/nvme_spec.o 00:02:08.139 CXX test/cpp_headers/nvme_zns.o 00:02:08.139 CXX test/cpp_headers/nvmf_cmd.o 00:02:08.139 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:08.139 CXX test/cpp_headers/nvmf.o 00:02:08.139 CXX test/cpp_headers/nvmf_spec.o 00:02:08.139 CXX test/cpp_headers/nvmf_transport.o 00:02:08.139 CXX test/cpp_headers/opal.o 00:02:08.139 CXX test/cpp_headers/opal_spec.o 00:02:08.139 CXX test/cpp_headers/pci_ids.o 00:02:08.139 LINK pci_ut 00:02:08.139 CXX test/cpp_headers/pipe.o 00:02:08.139 CXX test/cpp_headers/queue.o 00:02:08.139 LINK nvme_compliance 00:02:08.139 CXX test/cpp_headers/reduce.o 00:02:08.139 CXX test/cpp_headers/rpc.o 00:02:08.139 CXX test/cpp_headers/scheduler.o 00:02:08.139 CXX test/cpp_headers/scsi.o 00:02:08.139 CXX test/cpp_headers/scsi_spec.o 00:02:08.139 CXX test/cpp_headers/sock.o 00:02:08.139 LINK fdp 00:02:08.139 CXX test/cpp_headers/stdinc.o 00:02:08.398 CXX test/cpp_headers/string.o 00:02:08.398 CXX test/cpp_headers/thread.o 00:02:08.398 CXX test/cpp_headers/trace.o 00:02:08.398 CXX test/cpp_headers/trace_parser.o 00:02:08.398 CXX test/cpp_headers/tree.o 00:02:08.398 CXX test/cpp_headers/ublk.o 00:02:08.398 CXX test/cpp_headers/util.o 00:02:08.398 CXX test/cpp_headers/uuid.o 00:02:08.398 CXX test/cpp_headers/version.o 00:02:08.398 CXX test/cpp_headers/vfio_user_pci.o 00:02:08.398 CXX test/cpp_headers/vfio_user_spec.o 00:02:08.398 CXX test/cpp_headers/vhost.o 00:02:08.398 CXX test/cpp_headers/vmd.o 00:02:08.398 CXX test/cpp_headers/xor.o 00:02:08.398 CXX test/cpp_headers/zipf.o 00:02:08.962 LINK memory_ut 00:02:09.219 LINK cuse 00:02:09.219 LINK iscsi_fuzz 00:02:11.748 LINK esnap 00:02:12.006 00:02:12.006 real 0m45.233s 00:02:12.006 user 9m37.024s 00:02:12.006 sys 2m9.129s 00:02:12.006 15:25:51 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:12.006 15:25:51 -- common/autotest_common.sh@10 -- $ set +x 00:02:12.006 ************************************ 00:02:12.006 END TEST make 00:02:12.006 ************************************ 00:02:12.265 15:25:51 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:12.265 15:25:51 -- nvmf/common.sh@7 -- # uname -s 00:02:12.265 15:25:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:12.265 15:25:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:12.265 15:25:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:12.265 15:25:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:12.265 15:25:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:12.265 15:25:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:12.265 15:25:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:12.265 15:25:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:12.265 15:25:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:12.265 15:25:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:12.265 15:25:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:12.265 15:25:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:12.265 15:25:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:12.265 15:25:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:12.265 15:25:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:12.265 15:25:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:12.265 15:25:51 -- 
scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:12.265 15:25:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:12.265 15:25:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:12.265 15:25:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.265 15:25:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.265 15:25:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.265 15:25:51 -- paths/export.sh@5 -- # export PATH 00:02:12.265 15:25:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:12.265 15:25:51 -- nvmf/common.sh@46 -- # : 0 00:02:12.265 15:25:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:12.265 15:25:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:12.265 15:25:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:12.265 15:25:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:12.265 15:25:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:12.265 15:25:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:12.265 15:25:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:02:12.265 15:25:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:12.265 15:25:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:12.265 15:25:51 -- spdk/autotest.sh@32 -- # uname -s 00:02:12.265 15:25:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:12.265 15:25:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:12.265 15:25:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.265 15:25:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:12.265 15:25:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:12.265 15:25:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:12.265 15:25:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:12.265 15:25:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:12.265 15:25:51 -- spdk/autotest.sh@48 -- # udevadm_pid=1961096 00:02:12.265 15:25:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:12.265 15:25:51 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:12.265 15:25:51 -- spdk/autotest.sh@54 -- # echo 1961098 00:02:12.265 15:25:51 -- spdk/autotest.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:12.265 15:25:51 -- spdk/autotest.sh@56 -- # echo 1961099 00:02:12.265 15:25:51 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:02:12.265 15:25:51 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:02:12.265 15:25:51 -- spdk/autotest.sh@60 -- # echo 1961100 00:02:12.265 15:25:51 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:12.265 15:25:51 -- spdk/autotest.sh@62 -- # echo 1961101 00:02:12.265 15:25:51 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:02:12.265 15:25:51 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:12.265 15:25:51 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:12.265 15:25:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:12.265 15:25:51 -- common/autotest_common.sh@10 -- # set +x 00:02:12.265 15:25:51 -- spdk/autotest.sh@70 -- # create_test_list 00:02:12.265 15:25:51 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:12.265 15:25:51 -- common/autotest_common.sh@10 -- # set +x 00:02:12.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:12.265 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:12.265 15:25:51 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:12.265 15:25:51 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.265 15:25:51 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.265 15:25:51 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:12.265 15:25:51 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:12.265 15:25:51 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:12.265 15:25:51 -- common/autotest_common.sh@1440 -- # uname 00:02:12.265 15:25:51 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:12.265 15:25:51 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:12.265 15:25:51 -- common/autotest_common.sh@1460 -- # uname 00:02:12.265 15:25:51 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:12.265 15:25:51 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:12.265 15:25:51 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:12.266 15:25:51 -- spdk/autotest.sh@83 -- # hash lcov 00:02:12.266 15:25:51 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:12.266 15:25:51 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:12.266 --rc lcov_branch_coverage=1 00:02:12.266 --rc lcov_function_coverage=1 00:02:12.266 --rc genhtml_branch_coverage=1 00:02:12.266 --rc genhtml_function_coverage=1 00:02:12.266 --rc genhtml_legend=1 00:02:12.266 --rc geninfo_all_blocks=1 00:02:12.266 ' 00:02:12.266 15:25:51 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:12.266 --rc lcov_branch_coverage=1 00:02:12.266 
--rc lcov_function_coverage=1 00:02:12.266 --rc genhtml_branch_coverage=1 00:02:12.266 --rc genhtml_function_coverage=1 00:02:12.266 --rc genhtml_legend=1 00:02:12.266 --rc geninfo_all_blocks=1 00:02:12.266 ' 00:02:12.266 15:25:51 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:12.266 --rc lcov_branch_coverage=1 00:02:12.266 --rc lcov_function_coverage=1 00:02:12.266 --rc genhtml_branch_coverage=1 00:02:12.266 --rc genhtml_function_coverage=1 00:02:12.266 --rc genhtml_legend=1 00:02:12.266 --rc geninfo_all_blocks=1 00:02:12.266 --no-external' 00:02:12.266 15:25:51 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:12.266 --rc lcov_branch_coverage=1 00:02:12.266 --rc lcov_function_coverage=1 00:02:12.266 --rc genhtml_branch_coverage=1 00:02:12.266 --rc genhtml_function_coverage=1 00:02:12.266 --rc genhtml_legend=1 00:02:12.266 --rc geninfo_all_blocks=1 00:02:12.266 --no-external' 00:02:12.266 15:25:51 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:12.266 lcov: LCOV version 1.14 00:02:12.266 15:25:51 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:14.168 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:14.168 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:14.169 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:14.169 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:14.169 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:14.169 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:14.169 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 
00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:14.170 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:14.170 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:29.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:29.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:29.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:29.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:29.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:29.038 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:43.911 15:26:22 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:43.911 15:26:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:43.911 15:26:22 -- common/autotest_common.sh@10 -- # set +x 00:02:43.911 15:26:22 -- spdk/autotest.sh@102 -- # rm -f 00:02:43.911 15:26:22 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:44.476 0000:88:00.0 (8086 
0a54): Already using the nvme driver 00:02:44.476 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:44.476 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:44.476 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:44.476 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:44.476 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:44.476 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:44.476 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:44.476 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:44.476 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:44.734 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:44.734 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:44.734 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:44.734 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:44.734 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:44.734 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:02:44.734 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:44.734 15:26:24 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:02:44.734 15:26:24 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:44.734 15:26:24 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:44.734 15:26:24 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:44.734 15:26:24 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:44.734 15:26:24 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:44.734 15:26:24 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:44.734 15:26:24 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:44.734 15:26:24 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:44.734 15:26:24 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:02:44.734 15:26:24 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:02:44.734 15:26:24 -- spdk/autotest.sh@121 -- # grep -v p 00:02:44.735 15:26:24 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:02:44.735 15:26:24 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:02:44.735 15:26:24 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:02:44.735 15:26:24 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:02:44.735 15:26:24 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:44.735 No valid GPT data, bailing 00:02:44.735 15:26:24 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:44.735 15:26:24 -- scripts/common.sh@393 -- # pt= 00:02:44.735 15:26:24 -- scripts/common.sh@394 -- # return 1 00:02:44.735 15:26:24 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:44.735 1+0 records in 00:02:44.735 1+0 records out 00:02:44.735 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00248584 s, 422 MB/s 00:02:44.735 15:26:24 -- spdk/autotest.sh@129 -- # sync 00:02:44.735 15:26:24 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:44.735 15:26:24 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:44.735 15:26:24 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:46.632 15:26:25 -- spdk/autotest.sh@135 -- # uname -s 00:02:46.632 15:26:25 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:02:46.632 15:26:25 -- 
spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.633 15:26:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:46.633 15:26:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:46.633 15:26:25 -- common/autotest_common.sh@10 -- # set +x 00:02:46.633 ************************************ 00:02:46.633 START TEST setup.sh 00:02:46.633 ************************************ 00:02:46.633 15:26:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:02:46.633 * Looking for test storage... 00:02:46.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.633 15:26:25 -- setup/test-setup.sh@10 -- # uname -s 00:02:46.633 15:26:25 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:02:46.633 15:26:25 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.633 15:26:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:46.633 15:26:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:46.633 15:26:25 -- common/autotest_common.sh@10 -- # set +x 00:02:46.633 ************************************ 00:02:46.633 START TEST acl 00:02:46.633 ************************************ 00:02:46.633 15:26:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:02:46.890 * Looking for test storage... 00:02:46.890 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:46.890 15:26:26 -- setup/acl.sh@10 -- # get_zoned_devs 00:02:46.890 15:26:26 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:02:46.890 15:26:26 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:02:46.890 15:26:26 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:02:46.890 15:26:26 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:02:46.890 15:26:26 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:02:46.890 15:26:26 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:02:46.891 15:26:26 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:46.891 15:26:26 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:02:46.891 15:26:26 -- setup/acl.sh@12 -- # devs=() 00:02:46.891 15:26:26 -- setup/acl.sh@12 -- # declare -a devs 00:02:46.891 15:26:26 -- setup/acl.sh@13 -- # drivers=() 00:02:46.891 15:26:26 -- setup/acl.sh@13 -- # declare -A drivers 00:02:46.891 15:26:26 -- setup/acl.sh@51 -- # setup reset 00:02:46.891 15:26:26 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:46.891 15:26:26 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:48.264 15:26:27 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:48.264 15:26:27 -- setup/acl.sh@16 -- # local dev driver 00:02:48.264 15:26:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:48.264 15:26:27 -- setup/acl.sh@15 -- # setup output status 00:02:48.264 15:26:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:48.264 15:26:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:49.636 Hugepages 00:02:49.636 node hugesize free / total 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # 
read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 00:02:49.636 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:02:49.636 15:26:28 -- 
setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # continue 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:49.636 15:26:28 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:02:49.636 15:26:28 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:49.636 15:26:28 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:49.636 15:26:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:49.636 15:26:28 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:02:49.636 15:26:28 -- setup/acl.sh@54 -- # run_test denied denied 00:02:49.636 15:26:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:49.636 15:26:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:49.636 15:26:28 -- common/autotest_common.sh@10 -- # set +x 00:02:49.636 ************************************ 00:02:49.636 START TEST denied 00:02:49.636 ************************************ 00:02:49.636 15:26:28 -- common/autotest_common.sh@1104 -- # denied 00:02:49.636 15:26:28 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:02:49.636 15:26:28 -- setup/acl.sh@38 -- # setup output config 00:02:49.636 15:26:28 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:02:49.636 15:26:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:49.636 15:26:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:51.095 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:02:51.096 15:26:30 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:02:51.096 15:26:30 -- setup/acl.sh@28 -- # local dev driver 00:02:51.096 15:26:30 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:51.096 15:26:30 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:02:51.096 15:26:30 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:02:51.096 15:26:30 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:51.096 15:26:30 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e 
]] 00:02:51.096 15:26:30 -- setup/acl.sh@41 -- # setup reset 00:02:51.096 15:26:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:51.096 15:26:30 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:53.622 00:02:53.622 real 0m3.783s 00:02:53.622 user 0m1.062s 00:02:53.622 sys 0m1.823s 00:02:53.622 15:26:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:53.622 15:26:32 -- common/autotest_common.sh@10 -- # set +x 00:02:53.622 ************************************ 00:02:53.622 END TEST denied 00:02:53.622 ************************************ 00:02:53.622 15:26:32 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:53.622 15:26:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:53.622 15:26:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:53.622 15:26:32 -- common/autotest_common.sh@10 -- # set +x 00:02:53.622 ************************************ 00:02:53.622 START TEST allowed 00:02:53.622 ************************************ 00:02:53.622 15:26:32 -- common/autotest_common.sh@1104 -- # allowed 00:02:53.622 15:26:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:02:53.622 15:26:32 -- setup/acl.sh@45 -- # setup output config 00:02:53.622 15:26:32 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:02:53.622 15:26:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:53.622 15:26:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:02:55.518 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:55.518 15:26:34 -- setup/acl.sh@47 -- # verify 00:02:55.518 15:26:34 -- setup/acl.sh@28 -- # local dev driver 00:02:55.518 15:26:34 -- setup/acl.sh@48 -- # setup reset 00:02:55.518 15:26:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:55.518 15:26:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.424 00:02:57.424 real 0m3.784s 00:02:57.424 user 0m0.975s 00:02:57.424 sys 0m1.650s 00:02:57.424 15:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.424 15:26:36 -- common/autotest_common.sh@10 -- # set +x 00:02:57.424 ************************************ 00:02:57.425 END TEST allowed 00:02:57.425 ************************************ 00:02:57.425 00:02:57.425 real 0m10.301s 00:02:57.425 user 0m3.081s 00:02:57.425 sys 0m5.252s 00:02:57.425 15:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:57.425 15:26:36 -- common/autotest_common.sh@10 -- # set +x 00:02:57.425 ************************************ 00:02:57.425 END TEST acl 00:02:57.425 ************************************ 00:02:57.425 15:26:36 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:57.425 15:26:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:57.425 15:26:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:57.425 15:26:36 -- common/autotest_common.sh@10 -- # set +x 00:02:57.425 ************************************ 00:02:57.425 START TEST hugepages 00:02:57.425 ************************************ 00:02:57.425 15:26:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:02:57.425 * Looking for test storage... 
00:02:57.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:02:57.425 15:26:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:57.425 15:26:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:57.425 15:26:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:57.425 15:26:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:57.425 15:26:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:57.425 15:26:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:57.425 15:26:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:57.425 15:26:36 -- setup/common.sh@18 -- # local node= 00:02:57.425 15:26:36 -- setup/common.sh@19 -- # local var val 00:02:57.425 15:26:36 -- setup/common.sh@20 -- # local mem_f mem 00:02:57.425 15:26:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:57.425 15:26:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:57.425 15:26:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:57.425 15:26:36 -- setup/common.sh@28 -- # mapfile -t mem 00:02:57.425 15:26:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 43604568 kB' 'MemAvailable: 47158624 kB' 'Buffers: 2704 kB' 'Cached: 10224592 kB' 'SwapCached: 0 kB' 'Active: 7323888 kB' 'Inactive: 3520476 kB' 'Active(anon): 6893004 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 620440 kB' 'Mapped: 179312 kB' 'Shmem: 6275936 kB' 'KReclaimable: 192640 kB' 'Slab: 573280 kB' 'SReclaimable: 192640 kB' 'SUnreclaim: 380640 kB' 'KernelStack: 12992 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562320 kB' 'Committed_AS: 8027228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196484 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e 
]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- 
# [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.425 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.425 15:26:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # continue 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:57.426 15:26:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:57.426 15:26:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:57.426 15:26:36 -- setup/common.sh@33 -- # echo 2048 00:02:57.426 15:26:36 -- setup/common.sh@33 -- # return 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:57.426 15:26:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:57.426 15:26:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:57.426 15:26:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:57.426 15:26:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:57.426 15:26:36 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
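The trace above shows hugepages.sh unsetting HUGEMEM/HUGENODE/NRHUGE and then, via clear_hp, echoing 0 into each NUMA node's hugepage sysfs entries before default_setup reserves 1024 pages on node 0. A minimal standalone sketch of that idea, assuming only the standard Linux sysfs layout rather than the exact SPDK helpers:

#!/usr/bin/env bash
# Sketch only (needs root): reset hugepage reservations on every NUMA node,
# then reserve 1024 x 2 MiB pages on node 0, mirroring clear_hp/default_setup above.
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*/nr_hugepages; do
        echo 0 > "$hp"
    done
done
echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages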
00:02:57.426 15:26:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:57.426 15:26:36 -- setup/hugepages.sh@207 -- # get_nodes 00:02:57.426 15:26:36 -- setup/hugepages.sh@27 -- # local node 00:02:57.426 15:26:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.426 15:26:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:57.426 15:26:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:57.426 15:26:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:57.426 15:26:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:57.426 15:26:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:57.426 15:26:36 -- setup/hugepages.sh@208 -- # clear_hp 00:02:57.426 15:26:36 -- setup/hugepages.sh@37 -- # local node hp 00:02:57.426 15:26:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:57.426 15:26:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:57.426 15:26:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:57.426 15:26:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:57.426 15:26:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:57.426 15:26:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:57.426 15:26:36 -- setup/hugepages.sh@41 -- # echo 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:57.426 15:26:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:57.426 15:26:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:57.426 15:26:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:57.426 15:26:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:57.426 15:26:36 -- common/autotest_common.sh@10 -- # set +x 00:02:57.426 ************************************ 00:02:57.426 START TEST default_setup 00:02:57.426 ************************************ 00:02:57.426 15:26:36 -- common/autotest_common.sh@1104 -- # default_setup 00:02:57.426 15:26:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:57.426 15:26:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:57.426 15:26:36 -- setup/hugepages.sh@51 -- # shift 00:02:57.426 15:26:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:57.426 15:26:36 -- setup/hugepages.sh@52 -- # local node_ids 00:02:57.426 15:26:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:57.426 15:26:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:57.426 15:26:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:57.426 15:26:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:57.426 15:26:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:57.426 15:26:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:57.426 15:26:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:57.426 15:26:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:57.426 15:26:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:57.426 15:26:36 -- 
setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:57.426 15:26:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:57.426 15:26:36 -- setup/hugepages.sh@73 -- # return 0 00:02:57.426 15:26:36 -- setup/hugepages.sh@137 -- # setup output 00:02:57.426 15:26:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:57.426 15:26:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:58.357 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:58.357 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:58.357 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:58.357 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:58.357 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:58.357 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:58.357 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:58.357 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:58.357 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:02:58.357 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:02:58.357 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:02:58.357 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:02:58.615 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:02:58.615 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:02:58.615 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:02:58.615 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:02:59.553 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:02:59.553 15:26:38 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:59.553 15:26:38 -- setup/hugepages.sh@89 -- # local node 00:02:59.553 15:26:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:59.553 15:26:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:59.553 15:26:38 -- setup/hugepages.sh@92 -- # local surp 00:02:59.553 15:26:38 -- setup/hugepages.sh@93 -- # local resv 00:02:59.553 15:26:38 -- setup/hugepages.sh@94 -- # local anon 00:02:59.553 15:26:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:59.553 15:26:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:59.553 15:26:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:59.553 15:26:38 -- setup/common.sh@18 -- # local node= 00:02:59.553 15:26:38 -- setup/common.sh@19 -- # local var val 00:02:59.553 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.553 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.553 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.553 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.553 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.553 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45709340 kB' 'MemAvailable: 49263380 kB' 'Buffers: 2704 kB' 'Cached: 10224688 kB' 'SwapCached: 0 kB' 'Active: 7340984 kB' 'Inactive: 3520476 kB' 'Active(anon): 6910100 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637352 kB' 'Mapped: 179340 kB' 'Shmem: 6276032 kB' 'KReclaimable: 192608 kB' 'Slab: 573160 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380552 kB' 'KernelStack: 12800 kB' 'PageTables: 8220 kB' 
'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8047500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.553 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.553 15:26:38 -- setup/common.sh@32 
-- # continue 00:02:59.553 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read 
-r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.554 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:59.554 15:26:38 -- setup/common.sh@33 -- # echo 0 00:02:59.554 15:26:38 -- setup/common.sh@33 -- # return 0 00:02:59.554 15:26:38 -- setup/hugepages.sh@97 -- # anon=0 00:02:59.554 15:26:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:59.554 15:26:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.554 15:26:38 -- setup/common.sh@18 -- # local node= 00:02:59.554 15:26:38 -- setup/common.sh@19 -- # local var val 00:02:59.554 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.554 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.554 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.554 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.554 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.554 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.554 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45710196 kB' 'MemAvailable: 49264236 kB' 'Buffers: 2704 kB' 'Cached: 10224692 kB' 'SwapCached: 0 kB' 'Active: 7341744 kB' 'Inactive: 3520476 kB' 'Active(anon): 6910860 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638208 kB' 'Mapped: 179340 kB' 'Shmem: 6276036 kB' 'KReclaimable: 192608 kB' 'Slab: 573136 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380528 kB' 'KernelStack: 12800 kB' 'PageTables: 8228 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8047884 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 
'DirectMap1G: 42991616 kB' 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 
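The wall of "[[ <field> == \P\a\t\t\e\r\n ]]" / "continue" pairs above and below is setup/common.sh's get_meminfo helper scanning the meminfo dump field by field: it slurps the file with mapfile, strips any "Node N " prefix, then reads each "key: value" pair with IFS=': ' until the requested key matches and its value is echoed back to hugepages.sh. A minimal standalone sketch of that pattern, assuming the simplified helper below (get_meminfo_sketch is a made-up name) captures the traced logic:

#!/usr/bin/env bash
# Sketch of a get_meminfo-style lookup modeled on the logic visible in the
# trace: pick the global or per-node meminfo file, strip the "Node N " prefix
# that per-node files carry, then scan "key: value" pairs for the requested
# field and print its value.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if present

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo_sketch HugePages_Total      # 1024 on this runner
get_meminfo_sketch HugePages_Free 0     # same counter, restricted to NUMA node 0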
00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.555 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.555 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 
-- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
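The runs of escaped characters such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not log corruption: with xtrace (set -x) enabled, bash prints a quoted right-hand side of a [[ ... == ... ]] comparison with each character backslash-escaped, to show it is matched literally rather than as a glob. A two-line reproduction outside the test:

#!/usr/bin/env bash
set -x
get=HugePages_Surp
# xtrace prints this as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ MemTotal == "$get" ]] || true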
00:02:59.556 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.556 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.556 15:26:38 -- setup/common.sh@33 -- # echo 0 00:02:59.556 15:26:38 -- setup/common.sh@33 -- # return 0 00:02:59.556 15:26:38 -- setup/hugepages.sh@99 -- # surp=0 00:02:59.556 15:26:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:59.556 15:26:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:59.556 15:26:38 -- setup/common.sh@18 -- # local node= 00:02:59.556 15:26:38 -- setup/common.sh@19 -- # local var val 00:02:59.556 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.556 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.556 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.556 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.556 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.556 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.556 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45710912 kB' 'MemAvailable: 49264952 kB' 'Buffers: 2704 kB' 'Cached: 10224708 kB' 'SwapCached: 0 kB' 'Active: 7341048 kB' 'Inactive: 3520476 kB' 'Active(anon): 6910164 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637500 kB' 'Mapped: 179332 kB' 'Shmem: 6276052 kB' 'KReclaimable: 192608 kB' 'Slab: 573164 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380556 kB' 'KernelStack: 12848 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8047900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- 
setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.557 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.557 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 
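For context on the counters fetched in these passes: HugePages_Rsvd is pages already promised to mappings but not yet faulted in, HugePages_Surp is overcommit pages above nr_hugepages, and AnonHugePages counts transparent huge pages, which are unrelated to the hugetlb pool; all three are 0 on this runner. The dumps also show the expected size relation, which is easy to verify by hand:

#!/usr/bin/env bash
# Size relation visible in the meminfo dumps above (values taken from this run):
# Hugetlb = HugePages_Total * Hugepagesize when a single hugepage size is in use.
hugepages_total=1024
hugepagesize_kb=2048
echo "$((hugepages_total * hugepagesize_kb)) kB"   # 2097152 kB, i.e. 2 GiB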
00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.558 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.558 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:59.558 15:26:38 -- setup/common.sh@33 -- # echo 0 00:02:59.558 15:26:38 -- setup/common.sh@33 -- # return 0 00:02:59.558 15:26:38 -- setup/hugepages.sh@100 -- # resv=0 00:02:59.558 15:26:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:59.558 nr_hugepages=1024 00:02:59.558 15:26:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:59.558 resv_hugepages=0 00:02:59.558 15:26:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:59.558 surplus_hugepages=0 00:02:59.558 15:26:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:59.558 anon_hugepages=0 00:02:59.558 15:26:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.558 15:26:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:59.558 15:26:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:59.558 15:26:38 -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:02:59.559 15:26:38 -- setup/common.sh@18 -- # local node= 00:02:59.559 15:26:38 -- setup/common.sh@19 -- # local var val 00:02:59.559 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.559 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.559 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:59.559 15:26:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:59.559 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.559 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45711056 kB' 'MemAvailable: 49265096 kB' 'Buffers: 2704 kB' 'Cached: 10224720 kB' 'SwapCached: 0 kB' 'Active: 7340992 kB' 'Inactive: 3520476 kB' 'Active(anon): 6910108 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 637440 kB' 'Mapped: 179332 kB' 'Shmem: 6276064 kB' 'KReclaimable: 192608 kB' 'Slab: 573164 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380556 kB' 'KernelStack: 12848 kB' 'PageTables: 8348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8047916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196532 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 
15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.559 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.559 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var 
val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 
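Taken together, the hugepages.sh lines around this point amount to a consistency check: the anon/surp/resv values fetched above (all 0 here) plus the HugePages_Total being read must line up with the nr_hugepages the test configured (1024). A simplified standalone sketch of that check; the meminfo_field helper and reading the target from /proc/sys/vm/nr_hugepages are assumptions, not the repo's exact code:

#!/usr/bin/env bash
# Simplified sketch of the hugepage accounting check traced above. The test's
# real logic goes through get_meminfo and takes nr_hugepages from its own
# configuration rather than from sysctl.
meminfo_field() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # assumed source of the target value
total=$(meminfo_field HugePages_Total)
surp=$(meminfo_field HugePages_Surp)
resv=$(meminfo_field HugePages_Rsvd)
anon=$(meminfo_field AnonHugePages)

echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# With surp=0 and resv=0, as in this run, the check reduces to
# HugePages_Total == nr_hugepages (1024 == 1024).
((total == nr_hugepages + surp + resv)) || echo "hugepage accounting mismatch" >&2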
00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.560 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.560 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.560 
15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:59.561 15:26:38 -- setup/common.sh@33 -- # echo 1024 00:02:59.561 15:26:38 -- setup/common.sh@33 -- # return 0 00:02:59.561 15:26:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:59.561 15:26:38 -- setup/hugepages.sh@112 -- # get_nodes 00:02:59.561 15:26:38 -- setup/hugepages.sh@27 -- # local node 00:02:59.561 15:26:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.561 15:26:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:59.561 15:26:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:59.561 15:26:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:59.561 15:26:38 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:59.561 15:26:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:59.561 15:26:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:59.561 15:26:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:59.561 15:26:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:59.561 15:26:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:59.561 15:26:38 -- setup/common.sh@18 -- # local node=0 00:02:59.561 15:26:38 -- setup/common.sh@19 -- # local var val 00:02:59.561 15:26:38 -- setup/common.sh@20 -- # local mem_f mem 00:02:59.561 15:26:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:59.561 15:26:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:59.561 15:26:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:59.561 15:26:38 -- setup/common.sh@28 -- # mapfile -t mem 00:02:59.561 15:26:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20712388 kB' 'MemUsed: 12117496 kB' 'SwapCached: 0 kB' 'Active: 
5559024 kB' 'Inactive: 3242324 kB' 'Active(anon): 5445572 kB' 'Inactive(anon): 0 kB' 'Active(file): 113452 kB' 'Inactive(file): 3242324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8338384 kB' 'Mapped: 42560 kB' 'AnonPages: 466204 kB' 'Shmem: 4982608 kB' 'KernelStack: 7208 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98788 kB' 'Slab: 336504 kB' 'SReclaimable: 98788 kB' 'SUnreclaim: 237716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.561 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.561 15:26:38 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 
00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # continue 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # IFS=': ' 00:02:59.562 15:26:38 -- setup/common.sh@31 -- # read -r var val _ 00:02:59.562 15:26:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:59.562 15:26:38 -- setup/common.sh@33 -- # echo 0 00:02:59.562 15:26:38 -- setup/common.sh@33 -- # return 0 00:02:59.562 15:26:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:59.562 15:26:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:59.562 15:26:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:59.562 15:26:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:59.562 15:26:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:59.562 node0=1024 expecting 1024 00:02:59.562 15:26:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:59.562 00:02:59.562 real 0m2.452s 00:02:59.562 user 0m0.676s 00:02:59.562 sys 0m0.897s 00:02:59.562 15:26:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:02:59.562 15:26:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.562 ************************************ 00:02:59.562 END TEST default_setup 00:02:59.562 ************************************ 00:02:59.562 15:26:38 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:02:59.562 15:26:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:02:59.562 15:26:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:02:59.562 15:26:38 -- common/autotest_common.sh@10 -- # set +x 00:02:59.562 ************************************ 00:02:59.562 START TEST per_node_1G_alloc 00:02:59.562 ************************************ 00:02:59.562 15:26:38 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:02:59.562 15:26:38 -- setup/hugepages.sh@143 -- # local IFS=, 00:02:59.562 15:26:38 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:02:59.562 15:26:38 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:59.562 15:26:38 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:02:59.562 15:26:38 -- setup/hugepages.sh@51 -- # shift 00:02:59.562 15:26:38 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:02:59.562 15:26:38 -- setup/hugepages.sh@52 -- # local node_ids 00:02:59.562 15:26:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:59.563 15:26:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:59.563 15:26:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:02:59.563 15:26:38 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:02:59.563 15:26:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:59.563 15:26:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:59.563 15:26:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:59.563 15:26:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:59.563 15:26:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:59.563 15:26:38 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:02:59.563 15:26:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.563 15:26:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:59.563 15:26:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:59.563 15:26:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:02:59.563 15:26:38 -- setup/hugepages.sh@73 -- # return 0 00:02:59.563 15:26:38 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:02:59.563 
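The xtrace above shows setup/common.sh's get_meminfo helper doing the counting: it loads /proc/meminfo (or, when a node is given, /sys/devices/system/node/nodeN/meminfo with its "Node N" prefix stripped), splits each "Key: value" line on ': ', and skips field after field until the requested key (HugePages_Total, HugePages_Surp, ...) matches, which is why the log repeats the IFS/read/continue triplet for every meminfo field. The per_node_1G_alloc test that starts here requests 1048576 kB on each of nodes 0 and 1; with the default 2048 kB hugepage size that works out to 512 pages per node, hence NRHUGE=512 and HUGENODE=0,1. The bash below is a minimal sketch reconstructed from this trace, not the verbatim SPDK script; names and argument order mirror the log, but upstream details may differ.

#!/usr/bin/env bash
# Sketch of the meminfo scan traced above (reconstructed from the xtrace, not verbatim SPDK code).
shopt -s extglob   # the traced scripts use +([0-9]) patterns, so extglob is assumed to be on

get_meminfo() {
	local get=$1 node=${2:-}
	local mem_f=/proc/meminfo
	local -a mem
	local var val _

	# Per-node stats come from sysfs and carry a "Node N " prefix that gets stripped off.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")

	# Walk every "Key: value kB" line until the requested key matches; this loop is
	# what produces the long run of IFS=': ' / read -r var val _ / continue entries above.
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

# Per-node request for per_node_1G_alloc: 1 GiB per node at the default 2 MiB page size.
echo "$((1048576 / 2048)) hugepages per node"   # -> 512, i.e. NRHUGE=512 HUGENODE=0,1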
15:26:38 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:02:59.563 15:26:38 -- setup/hugepages.sh@146 -- # setup output 00:02:59.563 15:26:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:59.563 15:26:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:00.938 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:00.938 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:00.938 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:00.938 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:00.938 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:00.938 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:00.938 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:00.938 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:00.938 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:00.938 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:00.938 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:00.938 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:00.938 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:00.938 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:00.938 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:00.938 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:00.938 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:00.938 15:26:40 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:00.938 15:26:40 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:00.938 15:26:40 -- setup/hugepages.sh@89 -- # local node 00:03:00.938 15:26:40 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:00.938 15:26:40 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:00.938 15:26:40 -- setup/hugepages.sh@92 -- # local surp 00:03:00.938 15:26:40 -- setup/hugepages.sh@93 -- # local resv 00:03:00.938 15:26:40 -- setup/hugepages.sh@94 -- # local anon 00:03:00.938 15:26:40 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:00.938 15:26:40 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:00.938 15:26:40 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:00.938 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:00.938 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:00.938 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.938 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.938 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.938 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.938 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.938 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45715576 kB' 'MemAvailable: 49269616 kB' 'Buffers: 2704 kB' 'Cached: 10224768 kB' 'SwapCached: 0 kB' 'Active: 7341700 kB' 'Inactive: 3520476 kB' 'Active(anon): 6910816 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638548 kB' 'Mapped: 179344 kB' 
'Shmem: 6276112 kB' 'KReclaimable: 192608 kB' 'Slab: 573028 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380420 kB' 'KernelStack: 12848 kB' 'PageTables: 8292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8048080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196468 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': 
' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.938 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.938 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- 
setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:00.939 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:00.939 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:00.939 15:26:40 -- setup/hugepages.sh@97 -- # anon=0 00:03:00.939 15:26:40 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:00.939 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:00.939 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:00.939 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:00.939 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.939 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.939 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.939 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.939 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.939 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45715844 kB' 'MemAvailable: 49269884 kB' 'Buffers: 2704 kB' 'Cached: 10224768 kB' 'SwapCached: 0 kB' 'Active: 7342136 kB' 'Inactive: 3520476 kB' 'Active(anon): 6911252 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638968 kB' 'Mapped: 179272 kB' 'Shmem: 6276112 kB' 'KReclaimable: 192608 kB' 'Slab: 572984 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380376 kB' 'KernelStack: 12816 kB' 'PageTables: 8172 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8048092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 
1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 
15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.939 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.939 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 
15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 
15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:00.940 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:00.940 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:00.940 15:26:40 -- setup/hugepages.sh@99 -- # surp=0 00:03:00.940 15:26:40 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:00.940 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:00.940 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:00.940 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:00.940 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.940 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.940 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.940 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.940 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.940 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45715844 kB' 'MemAvailable: 49269884 kB' 'Buffers: 2704 kB' 'Cached: 10224780 kB' 'SwapCached: 0 kB' 'Active: 7341748 kB' 'Inactive: 3520476 kB' 'Active(anon): 6910864 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638520 kB' 'Mapped: 179268 kB' 'Shmem: 6276124 kB' 'KReclaimable: 192608 kB' 'Slab: 572984 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380376 kB' 'KernelStack: 12864 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8048104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.940 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.940 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # 
continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 
15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.941 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.941 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:00.941 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:00.941 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:00.941 15:26:40 -- setup/hugepages.sh@100 -- # resv=0 00:03:00.941 15:26:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:00.942 nr_hugepages=1024 00:03:00.942 15:26:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:00.942 resv_hugepages=0 00:03:00.942 15:26:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:00.942 surplus_hugepages=0 00:03:00.942 15:26:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:00.942 anon_hugepages=0 00:03:00.942 15:26:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.942 15:26:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:00.942 15:26:40 -- setup/hugepages.sh@110 -- # get_meminfo 
HugePages_Total 00:03:00.942 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:00.942 15:26:40 -- setup/common.sh@18 -- # local node= 00:03:00.942 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:00.942 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:00.942 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:00.942 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:00.942 15:26:40 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:00.942 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:00.942 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45723696 kB' 'MemAvailable: 49277736 kB' 'Buffers: 2704 kB' 'Cached: 10224796 kB' 'SwapCached: 0 kB' 'Active: 7341648 kB' 'Inactive: 3520476 kB' 'Active(anon): 6910764 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638364 kB' 'Mapped: 179340 kB' 'Shmem: 6276140 kB' 'KReclaimable: 192608 kB' 'Slab: 573012 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380404 kB' 'KernelStack: 12880 kB' 'PageTables: 8332 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8048120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196436 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.942 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.942 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # continue 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:00.943 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:00.943 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:00.943 15:26:40 -- setup/common.sh@33 -- # echo 1024 00:03:00.943 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:00.943 15:26:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:00.943 15:26:40 -- setup/hugepages.sh@112 -- # get_nodes 00:03:00.943 15:26:40 -- setup/hugepages.sh@27 -- # local node 00:03:00.943 15:26:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.943 15:26:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:00.943 15:26:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:00.943 15:26:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:01.202 15:26:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:01.202 15:26:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:01.202 15:26:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.202 15:26:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.202 15:26:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:01.202 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.202 15:26:40 -- setup/common.sh@18 -- # local node=0 00:03:01.202 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:01.202 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.202 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.202 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:01.202 15:26:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:01.202 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.202 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.202 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32829884 kB' 'MemFree: 21763120 kB' 'MemUsed: 11066764 kB' 'SwapCached: 0 kB' 'Active: 5559444 kB' 'Inactive: 3242324 kB' 'Active(anon): 5445992 kB' 'Inactive(anon): 0 kB' 'Active(file): 113452 kB' 'Inactive(file): 3242324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8338388 kB' 'Mapped: 42568 kB' 'AnonPages: 466816 kB' 'Shmem: 4982612 kB' 'KernelStack: 7192 kB' 'PageTables: 4796 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98788 kB' 'Slab: 336496 kB' 'SReclaimable: 98788 kB' 'SUnreclaim: 237708 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.202 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.202 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 
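The entries above are the script's field-by-field scan of the node 0 meminfo dump it has just printed, looking for HugePages_Surp. The same lookup can be sketched on its own as below; the helper name get_node_meminfo and the awk one-liner are illustrative only and are not taken from the SPDK setup scripts, which instead strip the "Node <N> " prefix and walk the fields with IFS=': ' as traced here.

  # Hypothetical stand-alone lookup of one field from a per-node meminfo file.
  # Lines in /sys/devices/system/node/nodeN/meminfo look like: "Node 0 HugePages_Surp:     0"
  get_node_meminfo() {
      local get=$1 node=$2
      awk -v key="$get" '$3 == key":" { print $4 }' "/sys/devices/system/node/node${node}/meminfo"
  }
  get_node_meminfo HugePages_Surp 0    # prints 0 on the node traced above

The trace then resumes the same scan until it reaches the HugePages_Surp field and echoes 0.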
00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- 
setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 
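What this pass is verifying is plain accounting: the global HugePages_Total read a moment ago (1024) must equal the requested nr_hugepages plus any surplus and reserved pages (both echoed as 0 earlier), and each node's expected share then has its own HugePages_Surp folded in, which is what the node 0 lookup traced here returns. A minimal stand-alone sketch of that check, with variable names chosen for illustration rather than taken from setup/hugepages.sh:

  nr_hugepages=1024; surp=0; resv=0     # the values echoed earlier in this trace
  total=$(awk '$1 == "HugePages_Total:" { print $2 }' /proc/meminfo)
  (( total == nr_hugepages + surp + resv )) && echo "global hugepage count verified: $total"

The trace below finishes the node 0 pass and then repeats the same surplus lookup for node 1.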
00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:01.203 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:01.203 15:26:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.203 15:26:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:01.203 15:26:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:01.203 15:26:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:01.203 15:26:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:01.203 15:26:40 -- setup/common.sh@18 -- # local node=1 00:03:01.203 15:26:40 -- setup/common.sh@19 -- # local var val 00:03:01.203 15:26:40 -- setup/common.sh@20 -- # local mem_f mem 00:03:01.203 15:26:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:01.203 15:26:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:01.203 15:26:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:01.203 15:26:40 -- setup/common.sh@28 -- # mapfile -t mem 00:03:01.203 15:26:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23959028 kB' 'MemUsed: 3752824 kB' 'SwapCached: 0 kB' 'Active: 1782320 kB' 'Inactive: 278152 kB' 'Active(anon): 1464888 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 278152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1889140 kB' 'Mapped: 136772 kB' 'AnonPages: 171640 kB' 'Shmem: 1293556 kB' 'KernelStack: 5688 kB' 'PageTables: 3536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93820 kB' 'Slab: 236508 kB' 'SReclaimable: 93820 kB' 'SUnreclaim: 142688 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 
15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.203 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.203 15:26:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- 
setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # continue 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # IFS=': ' 00:03:01.204 15:26:40 -- setup/common.sh@31 -- # read -r var val _ 00:03:01.204 15:26:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:01.204 15:26:40 -- setup/common.sh@33 -- # echo 0 00:03:01.204 15:26:40 -- setup/common.sh@33 -- # return 0 00:03:01.204 15:26:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.204 15:26:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.204 15:26:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.204 15:26:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:01.204 node0=512 expecting 512 00:03:01.204 15:26:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:01.204 15:26:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:01.204 15:26:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:01.204 15:26:40 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:01.204 node1=512 expecting 512 00:03:01.204 15:26:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:01.204 00:03:01.204 real 0m1.448s 00:03:01.204 user 0m0.596s 00:03:01.204 sys 0m0.816s 00:03:01.204 15:26:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:01.204 15:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:01.204 ************************************ 00:03:01.204 END TEST per_node_1G_alloc 00:03:01.204 ************************************ 00:03:01.204 15:26:40 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:01.204 
15:26:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:01.204 15:26:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:01.204 15:26:40 -- common/autotest_common.sh@10 -- # set +x 00:03:01.204 ************************************ 00:03:01.204 START TEST even_2G_alloc 00:03:01.204 ************************************ 00:03:01.204 15:26:40 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:01.204 15:26:40 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:01.204 15:26:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:01.204 15:26:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:01.204 15:26:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:01.204 15:26:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:01.204 15:26:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:01.204 15:26:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:01.204 15:26:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:01.204 15:26:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:01.204 15:26:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:01.204 15:26:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:01.204 15:26:40 -- setup/hugepages.sh@83 -- # : 512 00:03:01.204 15:26:40 -- setup/hugepages.sh@84 -- # : 1 00:03:01.204 15:26:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:01.204 15:26:40 -- setup/hugepages.sh@83 -- # : 0 00:03:01.204 15:26:40 -- setup/hugepages.sh@84 -- # : 0 00:03:01.204 15:26:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:01.204 15:26:40 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:01.204 15:26:40 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:01.205 15:26:40 -- setup/hugepages.sh@153 -- # setup output 00:03:01.205 15:26:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:01.205 15:26:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:02.137 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:02.137 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:02.137 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:02.137 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:02.137 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:02.137 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:02.137 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:02.137 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:02.137 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:02.137 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:02.137 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:02.137 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:02.137 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:02.137 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:02.137 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:02.137 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:03:02.137 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:02.400 15:26:41 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:02.400 15:26:41 -- setup/hugepages.sh@89 -- # local node 00:03:02.400 15:26:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:02.400 15:26:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:02.400 15:26:41 -- setup/hugepages.sh@92 -- # local surp 00:03:02.400 15:26:41 -- setup/hugepages.sh@93 -- # local resv 00:03:02.400 15:26:41 -- setup/hugepages.sh@94 -- # local anon 00:03:02.400 15:26:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:02.400 15:26:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:02.400 15:26:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:02.400 15:26:41 -- setup/common.sh@18 -- # local node= 00:03:02.400 15:26:41 -- setup/common.sh@19 -- # local var val 00:03:02.400 15:26:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.400 15:26:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.400 15:26:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.400 15:26:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.400 15:26:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.400 15:26:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45717588 kB' 'MemAvailable: 49271628 kB' 'Buffers: 2704 kB' 'Cached: 10224860 kB' 'SwapCached: 0 kB' 'Active: 7345252 kB' 'Inactive: 3520476 kB' 'Active(anon): 6914368 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641556 kB' 'Mapped: 179840 kB' 'Shmem: 6276204 kB' 'KReclaimable: 192608 kB' 'Slab: 572820 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380212 kB' 'KernelStack: 12832 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8051904 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196548 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 
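The even_2G_alloc pass that begins here asks get_test_nr_hugepages for 2097152 kB; with the 2048 kB Hugepagesize reported in the meminfo dumps that is 1024 pages, and the per-node helper assigns 512 to each of the two NUMA nodes before setup.sh is run with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes, evidently so the pages are allocated evenly. A minimal sketch of that arithmetic, assuming a 2-node host like the one in this log:

  size_kb=2097152                                                             # requested total from the trace
  hugepagesize_kb=$(awk '$1 == "Hugepagesize:" { print $2 }' /proc/meminfo)   # 2048 on this host
  nr_hugepages=$(( size_kb / hugepagesize_kb ))                               # 1024
  nodes=2
  echo "requesting $(( nr_hugepages / nodes )) hugepages on each of $nodes nodes"

The verify_nr_hugepages scan of /proc/meminfo then continues below to confirm the result.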
00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.400 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.400 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 
15:26:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:02.401 15:26:41 -- setup/common.sh@33 -- # echo 0 00:03:02.401 15:26:41 -- setup/common.sh@33 -- # 
return 0 00:03:02.401 15:26:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:02.401 15:26:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:02.401 15:26:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.401 15:26:41 -- setup/common.sh@18 -- # local node= 00:03:02.401 15:26:41 -- setup/common.sh@19 -- # local var val 00:03:02.401 15:26:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.401 15:26:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.401 15:26:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.401 15:26:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.401 15:26:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.401 15:26:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45721544 kB' 'MemAvailable: 49275584 kB' 'Buffers: 2704 kB' 'Cached: 10224860 kB' 'SwapCached: 0 kB' 'Active: 7348172 kB' 'Inactive: 3520476 kB' 'Active(anon): 6917288 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644516 kB' 'Mapped: 179780 kB' 'Shmem: 6276204 kB' 'KReclaimable: 192608 kB' 'Slab: 572820 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380212 kB' 'KernelStack: 12864 kB' 'PageTables: 8320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8054304 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196504 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- 
# IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.401 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.401 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 
15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.402 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.402 15:26:41 -- setup/common.sh@33 -- # echo 0 00:03:02.402 15:26:41 -- setup/common.sh@33 -- # return 0 00:03:02.402 15:26:41 -- setup/hugepages.sh@99 -- # surp=0 00:03:02.402 15:26:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:02.402 15:26:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:02.402 15:26:41 -- setup/common.sh@18 -- # local node= 00:03:02.402 15:26:41 -- setup/common.sh@19 -- # local var val 00:03:02.402 15:26:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.402 15:26:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.402 15:26:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.402 15:26:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.402 15:26:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.402 15:26:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.402 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 
60541736 kB' 'MemFree: 45721900 kB' 'MemAvailable: 49275940 kB' 'Buffers: 2704 kB' 'Cached: 10224864 kB' 'SwapCached: 0 kB' 'Active: 7348108 kB' 'Inactive: 3520476 kB' 'Active(anon): 6917224 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 644480 kB' 'Mapped: 180260 kB' 'Shmem: 6276208 kB' 'KReclaimable: 192608 kB' 'Slab: 572820 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380212 kB' 'KernelStack: 12880 kB' 'PageTables: 8300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8054320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196504 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Active(anon) 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # 
continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.403 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.403 15:26:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 
-- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:02.404 15:26:41 -- setup/common.sh@33 -- # echo 0 00:03:02.404 15:26:41 -- setup/common.sh@33 -- # return 0 00:03:02.404 15:26:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:02.404 15:26:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:02.404 nr_hugepages=1024 00:03:02.404 15:26:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:02.404 resv_hugepages=0 00:03:02.404 15:26:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:02.404 surplus_hugepages=0 00:03:02.404 15:26:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:02.404 anon_hugepages=0 00:03:02.404 15:26:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.404 15:26:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:02.404 15:26:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:02.404 15:26:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:02.404 15:26:41 -- setup/common.sh@18 -- # local node= 00:03:02.404 15:26:41 -- setup/common.sh@19 -- # local var val 00:03:02.404 15:26:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.404 15:26:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.404 15:26:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:02.404 15:26:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:02.404 15:26:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.404 15:26:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45724796 kB' 'MemAvailable: 49278836 kB' 'Buffers: 2704 kB' 'Cached: 10224888 kB' 'SwapCached: 0 kB' 'Active: 7344616 kB' 'Inactive: 3520476 kB' 'Active(anon): 6913732 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 640972 kB' 'Mapped: 179780 kB' 'Shmem: 6276232 kB' 'KReclaimable: 192608 kB' 'Slab: 572856 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380248 kB' 'KernelStack: 12880 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 
'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8051548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196500 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 
-- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.404 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.404 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 
-- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 
15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:02.405 15:26:41 -- setup/common.sh@33 -- # echo 1024 00:03:02.405 15:26:41 -- setup/common.sh@33 -- # return 0 00:03:02.405 15:26:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:02.405 15:26:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:02.405 15:26:41 -- setup/hugepages.sh@27 -- # local node 00:03:02.405 15:26:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.405 15:26:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.405 15:26:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:02.405 15:26:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:02.405 15:26:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:02.405 15:26:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:02.405 15:26:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.405 15:26:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.405 15:26:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:02.405 15:26:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.405 15:26:41 -- setup/common.sh@18 -- # local node=0 00:03:02.405 15:26:41 -- setup/common.sh@19 -- # local var val 00:03:02.405 15:26:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.405 15:26:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.405 15:26:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:02.405 15:26:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:02.405 15:26:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.405 15:26:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21755804 kB' 'MemUsed: 11074080 kB' 'SwapCached: 0 kB' 'Active: 5564916 kB' 'Inactive: 3242324 kB' 'Active(anon): 5451464 kB' 'Inactive(anon): 0 kB' 'Active(file): 113452 kB' 'Inactive(file): 3242324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8338392 kB' 'Mapped: 42572 kB' 'AnonPages: 472096 kB' 'Shmem: 4982616 kB' 'KernelStack: 7160 kB' 'PageTables: 4716 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98788 kB' 'Slab: 336432 kB' 'SReclaimable: 98788 kB' 'SUnreclaim: 237644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 
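00:03:02.405 Note: this stretch of the trace is the hugepages setup test tallying its counters before re-checking the allocation: get_meminfo returns 0 for AnonHugePages, HugePages_Surp and HugePages_Rsvd, returns 1024 for HugePages_Total (2048 kB pages), and get_nodes then repeats the lookup against /sys/devices/system/node/node<N>/meminfo for the two NUMA nodes, which hold 512 pages each. As a minimal sketch only (this is not the SPDK setup/common.sh helper, and get_meminfo_sketch is an assumed name), the lookup the xtrace keeps replaying boils down to:
#!/usr/bin/env bash
# Sketch of the meminfo lookup replayed in the trace above: print the value of
# one key from /proc/meminfo, or from the per-node sysfs file when a node
# number is supplied. Illustrative only, not the actual SPDK helper.
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val _
    while IFS= read -r line; do
        # Per-node files prefix every field with "Node <N> "; strip that so the
        # key names line up with the /proc/meminfo spelling.
        if [[ $line == "Node "* ]]; then
            line=${line#Node }
            line=${line#* }
        fi
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# Values matching the trace above: 1024 pre-allocated 2048 kB pages system-wide,
# split 512/512 across two NUMA nodes, with no surplus, reserved or THP pages.
anon=$(get_meminfo_sketch AnonHugePages)
surp=$(get_meminfo_sketch HugePages_Surp)
resv=$(get_meminfo_sketch HugePages_Rsvd)
total=$(get_meminfo_sketch HugePages_Total)
node0=$(get_meminfo_sketch HugePages_Total 0)
# Same consistency check the hugepages test performs with nr_hugepages=1024.
(( total == 1024 + surp + resv )) && echo "hugepage accounting consistent: $total total, $node0 on node0"
The echo/return pair at setup/common.sh@33 is what feeds anon=0, surp=0, resv=0 and nr_hugepages=1024 into setup/hugepages.sh above; the per-node passes below repeat the same scan against node0 and node1.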
00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.405 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.405 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Mapped 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@33 -- # echo 0 00:03:02.406 15:26:41 -- setup/common.sh@33 -- # return 0 00:03:02.406 15:26:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.406 15:26:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:02.406 15:26:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:02.406 15:26:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:02.406 15:26:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:02.406 15:26:41 -- setup/common.sh@18 -- # local node=1 00:03:02.406 15:26:41 -- setup/common.sh@19 -- # local var val 00:03:02.406 15:26:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:02.406 15:26:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:02.406 15:26:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:02.406 15:26:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:02.406 15:26:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:02.406 15:26:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:02.406 
15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23965020 kB' 'MemUsed: 3746832 kB' 'SwapCached: 0 kB' 'Active: 1782284 kB' 'Inactive: 278152 kB' 'Active(anon): 1464852 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 278152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1889216 kB' 'Mapped: 137688 kB' 'AnonPages: 171424 kB' 'Shmem: 1293632 kB' 'KernelStack: 5704 kB' 'PageTables: 3484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93820 kB' 'Slab: 236424 kB' 'SReclaimable: 93820 kB' 'SUnreclaim: 142604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.406 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.406 15:26:41 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ 
Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- 
setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # continue 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:02.407 15:26:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:02.407 15:26:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:02.407 15:26:41 -- setup/common.sh@33 -- # echo 0 00:03:02.407 15:26:41 -- setup/common.sh@33 -- # return 0 00:03:02.407 15:26:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.407 15:26:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.407 15:26:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.407 15:26:41 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:02.407 node0=512 expecting 512 00:03:02.407 15:26:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:02.407 15:26:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:02.407 15:26:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:02.407 15:26:41 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:02.407 node1=512 expecting 512 00:03:02.407 15:26:41 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:02.407 00:03:02.407 real 0m1.332s 00:03:02.407 user 0m0.548s 00:03:02.407 sys 0m0.746s 00:03:02.407 15:26:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.407 15:26:41 -- common/autotest_common.sh@10 -- # set +x 00:03:02.407 ************************************ 00:03:02.407 END TEST even_2G_alloc 00:03:02.407 ************************************ 00:03:02.407 15:26:41 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:02.407 15:26:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:02.407 15:26:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:02.407 15:26:41 -- common/autotest_common.sh@10 -- # set +x 00:03:02.407 ************************************ 00:03:02.407 START TEST odd_alloc 00:03:02.407 ************************************ 00:03:02.407 15:26:41 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:02.407 15:26:41 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:02.407 15:26:41 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:02.407 15:26:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:02.407 15:26:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:02.407 15:26:41 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:02.407 15:26:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:02.407 15:26:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:02.407 15:26:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:02.407 15:26:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:02.407 15:26:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:02.407 15:26:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=512 00:03:02.407 15:26:41 -- setup/hugepages.sh@83 -- # : 513 00:03:02.407 15:26:41 -- setup/hugepages.sh@84 -- # : 1 00:03:02.407 15:26:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:02.407 15:26:41 -- setup/hugepages.sh@83 -- # : 0 00:03:02.407 15:26:41 -- setup/hugepages.sh@84 -- # : 0 00:03:02.407 15:26:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:02.407 15:26:41 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:02.407 15:26:41 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:02.407 15:26:41 -- setup/hugepages.sh@160 -- # setup output 00:03:02.407 15:26:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:02.407 15:26:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:03.783 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:03.783 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:03.783 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:03.783 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:03.783 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:03.783 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:03.783 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:03.783 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:03.783 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:03.783 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:03.783 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:03.783 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:03.783 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:03.783 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:03.783 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:03.783 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:03.783 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:03.783 15:26:43 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:03.783 15:26:43 -- setup/hugepages.sh@89 -- # local node 00:03:03.783 15:26:43 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:03.783 15:26:43 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:03.783 15:26:43 -- setup/hugepages.sh@92 -- # local surp 00:03:03.783 15:26:43 -- setup/hugepages.sh@93 -- # local resv 00:03:03.783 15:26:43 -- setup/hugepages.sh@94 -- # local anon 00:03:03.783 15:26:43 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:03.783 15:26:43 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:03.783 15:26:43 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:03.783 15:26:43 -- setup/common.sh@18 -- # local node= 00:03:03.783 15:26:43 -- setup/common.sh@19 -- # local var val 00:03:03.783 15:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.783 15:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.783 15:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.783 15:26:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.783 15:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.783 15:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- 
setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45733520 kB' 'MemAvailable: 49287560 kB' 'Buffers: 2704 kB' 'Cached: 10224952 kB' 'SwapCached: 0 kB' 'Active: 7340636 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909752 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636776 kB' 'Mapped: 178572 kB' 'Shmem: 6276296 kB' 'KReclaimable: 192608 kB' 'Slab: 572856 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380248 kB' 'KernelStack: 12880 kB' 'PageTables: 8168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 8034344 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196564 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- 
setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.783 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.783 15:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 
15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.784 15:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:03.784 15:26:43 -- setup/common.sh@33 -- # echo 0 00:03:03.784 15:26:43 -- setup/common.sh@33 -- # return 0 00:03:03.784 15:26:43 -- setup/hugepages.sh@97 -- # anon=0 00:03:03.784 15:26:43 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:03.784 15:26:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:03.784 15:26:43 -- setup/common.sh@18 -- # local node= 00:03:03.784 15:26:43 -- setup/common.sh@19 -- # local var val 00:03:03.784 15:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.784 15:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.784 15:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.784 15:26:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.784 15:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.784 15:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.784 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45738484 kB' 'MemAvailable: 49292524 kB' 'Buffers: 2704 kB' 'Cached: 10224960 kB' 'SwapCached: 0 kB' 'Active: 7340196 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909312 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636344 kB' 'Mapped: 178572 kB' 'Shmem: 6276304 kB' 'KReclaimable: 192608 kB' 'Slab: 572856 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 
380248 kB' 'KernelStack: 12784 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 8034360 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196516 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Active(file) 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.785 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.785 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 
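At this point the odd_alloc test has requested 1025 pages of 2048 kB (size 2098176 kB, i.e. HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes, with a target split of 513 pages on node 0 and 512 on node 1), and verify_nr_hugepages is re-reading the global meminfo, which now reports 'HugePages_Total: 1025' and 'HugePages_Free: 1025'. The sketch below shows the same style of accounting check seen earlier in the trace (HugePages_Total compared against the requested count plus surplus and reserved pages), followed by a dump of the per-node split; the script and its variable names are illustrative, not the test's own code.

#!/usr/bin/env bash
# Sketch of the accounting check behind verify_nr_hugepages in the trace:
# HugePages_Total should equal the requested page count plus any surplus and
# reserved pages, and each node's share should match the intended split.
# Variable names and the default page count are illustrative assumptions.
shopt -s nullglob

expected=${1:-1025}   # e.g. 2049 MB of HUGEMEM in 2048 kB pages, rounded up

total=$(awk '$1 == "HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk  '$1 == "HugePages_Surp:"  {print $2}' /proc/meminfo)
rsvd=$(awk  '$1 == "HugePages_Rsvd:"  {print $2}' /proc/meminfo)

if (( total == expected + surp + rsvd )); then
    echo "hugepage accounting OK: total=$total expected=$expected surp=$surp rsvd=$rsvd"
else
    echo "hugepage accounting MISMATCH: total=$total expected=$expected surp=$surp rsvd=$rsvd" >&2
    exit 1
fi

# Per-NUMA-node split, comparable to the "node0=512 expecting 512" /
# "node1=512 expecting 512" lines echoed by the previous test above.
for node_dir in /sys/devices/system/node/node[0-9]*; do
    n=${node_dir##*node}
    pages=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
    echo "node$n=$pages"
done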
00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:03.786 15:26:43 -- setup/common.sh@33 -- # echo 0 00:03:03.786 15:26:43 -- setup/common.sh@33 -- # return 0 00:03:03.786 15:26:43 -- setup/hugepages.sh@99 -- # surp=0 00:03:03.786 15:26:43 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:03.786 15:26:43 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:03.786 15:26:43 -- setup/common.sh@18 -- # local node= 00:03:03.786 15:26:43 -- setup/common.sh@19 -- # local var val 00:03:03.786 15:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.786 15:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.786 15:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.786 15:26:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.786 15:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.786 15:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45738700 kB' 'MemAvailable: 49292740 kB' 'Buffers: 2704 kB' 'Cached: 10224972 kB' 'SwapCached: 0 kB' 'Active: 7340084 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909200 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635732 kB' 'Mapped: 178496 kB' 'Shmem: 6276316 kB' 'KReclaimable: 192608 kB' 'Slab: 572840 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380232 kB' 'KernelStack: 12896 kB' 'PageTables: 8152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 8035644 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196596 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.786 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.786 15:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 
15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 
00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.787 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.787 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:03.788 15:26:43 -- setup/common.sh@33 -- # echo 0 00:03:03.788 15:26:43 -- setup/common.sh@33 -- # return 0 00:03:03.788 15:26:43 -- setup/hugepages.sh@100 -- # resv=0 00:03:03.788 15:26:43 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:03.788 nr_hugepages=1025 00:03:03.788 15:26:43 -- setup/hugepages.sh@103 -- # 
echo resv_hugepages=0 00:03:03.788 resv_hugepages=0 00:03:03.788 15:26:43 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:03.788 surplus_hugepages=0 00:03:03.788 15:26:43 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:03.788 anon_hugepages=0 00:03:03.788 15:26:43 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:03.788 15:26:43 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:03.788 15:26:43 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:03.788 15:26:43 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:03.788 15:26:43 -- setup/common.sh@18 -- # local node= 00:03:03.788 15:26:43 -- setup/common.sh@19 -- # local var val 00:03:03.788 15:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:03.788 15:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:03.788 15:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:03.788 15:26:43 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:03.788 15:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:03.788 15:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45738700 kB' 'MemAvailable: 49292740 kB' 'Buffers: 2704 kB' 'Cached: 10224972 kB' 'SwapCached: 0 kB' 'Active: 7340444 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909560 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636548 kB' 'Mapped: 178496 kB' 'Shmem: 6276316 kB' 'KReclaimable: 192608 kB' 'Slab: 572840 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380232 kB' 'KernelStack: 13120 kB' 'PageTables: 8580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609872 kB' 'Committed_AS: 8038776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.788 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.788 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- 
setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:03.789 15:26:43 -- setup/common.sh@32 -- # continue 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:03.789 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.048 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.048 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:04.048 15:26:43 -- setup/common.sh@33 -- # echo 1025 00:03:04.048 15:26:43 -- setup/common.sh@33 -- # return 0 00:03:04.048 15:26:43 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:04.048 15:26:43 -- setup/hugepages.sh@112 -- # get_nodes 00:03:04.048 15:26:43 -- setup/hugepages.sh@27 -- # local node 00:03:04.048 15:26:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.048 15:26:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:04.048 15:26:43 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:04.048 15:26:43 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:04.048 15:26:43 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:04.049 15:26:43 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:04.049 15:26:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.049 15:26:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.049 15:26:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:04.049 15:26:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.049 15:26:43 -- setup/common.sh@18 -- # local node=0 00:03:04.049 15:26:43 -- setup/common.sh@19 -- # local var val 00:03:04.049 15:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.049 15:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.049 
15:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:04.049 15:26:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:04.049 15:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.049 15:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21755344 kB' 'MemUsed: 11074540 kB' 'SwapCached: 0 kB' 'Active: 5559348 kB' 'Inactive: 3242324 kB' 'Active(anon): 5445896 kB' 'Inactive(anon): 0 kB' 'Active(file): 113452 kB' 'Inactive(file): 3242324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8338408 kB' 'Mapped: 41960 kB' 'AnonPages: 466472 kB' 'Shmem: 4982632 kB' 'KernelStack: 7608 kB' 'PageTables: 5580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98788 kB' 'Slab: 336452 kB' 'SReclaimable: 98788 kB' 'SUnreclaim: 237664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.049 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.049 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@33 -- # echo 0 00:03:04.050 15:26:43 -- setup/common.sh@33 -- # return 0 00:03:04.050 15:26:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.050 15:26:43 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:04.050 15:26:43 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:04.050 15:26:43 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:04.050 15:26:43 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:04.050 15:26:43 -- setup/common.sh@18 -- # local node=1 00:03:04.050 15:26:43 -- setup/common.sh@19 -- # local var val 00:03:04.050 15:26:43 -- setup/common.sh@20 -- # local mem_f mem 00:03:04.050 15:26:43 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:04.050 15:26:43 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:04.050 15:26:43 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:04.050 15:26:43 -- setup/common.sh@28 -- # mapfile -t mem 00:03:04.050 15:26:43 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 23980616 kB' 'MemUsed: 3731236 kB' 'SwapCached: 0 kB' 'Active: 1781048 kB' 'Inactive: 278152 kB' 'Active(anon): 1463616 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 278152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1889308 kB' 'Mapped: 136536 kB' 'AnonPages: 169932 kB' 'Shmem: 1293724 kB' 'KernelStack: 5656 kB' 'PageTables: 3132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93820 kB' 'Slab: 236388 kB' 'SReclaimable: 93820 kB' 'SUnreclaim: 142568 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 
00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.050 15:26:43 -- setup/common.sh@32 -- # continue 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # IFS=': ' 00:03:04.050 15:26:43 -- setup/common.sh@31 -- # read -r var val _ 00:03:04.051 15:26:43 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:04.051 15:26:43 -- setup/common.sh@33 -- # echo 0 00:03:04.051 15:26:43 -- setup/common.sh@33 -- # return 0 00:03:04.051 15:26:43 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.051 15:26:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.051 15:26:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.051 15:26:43 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:04.051 node0=512 expecting 513 00:03:04.051 15:26:43 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:04.051 15:26:43 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:04.051 15:26:43 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:04.051 15:26:43 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:04.051 node1=513 expecting 512 00:03:04.051 15:26:43 -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:04.051 00:03:04.051 real 0m1.466s 00:03:04.051 user 0m0.592s 00:03:04.051 sys 0m0.838s 00:03:04.051 15:26:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:04.051 15:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:04.051 ************************************ 00:03:04.051 END TEST odd_alloc 00:03:04.051 ************************************ 00:03:04.051 15:26:43 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:04.051 15:26:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:04.051 15:26:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:04.051 15:26:43 -- common/autotest_common.sh@10 -- # set +x 00:03:04.051 ************************************ 00:03:04.051 START TEST custom_alloc 00:03:04.051 ************************************ 00:03:04.051 15:26:43 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:04.051 15:26:43 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:04.051 15:26:43 -- setup/hugepages.sh@169 -- # local node 00:03:04.051 15:26:43 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:04.051 15:26:43 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:04.051 15:26:43 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:04.051 15:26:43 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:04.051 15:26:43 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:04.051 15:26:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:04.051 15:26:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:04.051 15:26:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:04.051 15:26:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.051 15:26:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:04.051 15:26:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.051 15:26:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.051 15:26:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.051 15:26:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:04.051 15:26:43 -- setup/hugepages.sh@83 -- # : 256 00:03:04.051 15:26:43 -- setup/hugepages.sh@84 -- # : 1 00:03:04.051 15:26:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:04.051 15:26:43 -- setup/hugepages.sh@83 -- # : 0 00:03:04.051 15:26:43 -- setup/hugepages.sh@84 -- # : 0 00:03:04.051 15:26:43 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:04.051 15:26:43 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:04.051 15:26:43 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:04.051 15:26:43 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:04.051 15:26:43 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:04.051 15:26:43 -- setup/hugepages.sh@62 -- # 
user_nodes=() 00:03:04.051 15:26:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.051 15:26:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.051 15:26:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.051 15:26:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.051 15:26:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.051 15:26:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:04.051 15:26:43 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:04.051 15:26:43 -- setup/hugepages.sh@78 -- # return 0 00:03:04.051 15:26:43 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:04.051 15:26:43 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:04.051 15:26:43 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:04.051 15:26:43 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:04.051 15:26:43 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:04.051 15:26:43 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:04.051 15:26:43 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:04.051 15:26:43 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:04.051 15:26:43 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:04.051 15:26:43 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:04.051 15:26:43 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:04.051 15:26:43 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:04.051 15:26:43 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:04.051 15:26:43 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:04.051 15:26:43 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:04.051 15:26:43 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:04.051 15:26:43 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:04.051 15:26:43 -- setup/hugepages.sh@78 -- # return 0 00:03:04.051 15:26:43 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:04.051 15:26:43 -- setup/hugepages.sh@187 -- # setup output 00:03:04.051 15:26:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.051 15:26:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:05.430 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.430 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:05.430 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.430 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:05.430 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.430 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.430 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.430 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.430 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.430 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:05.430 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:05.430 0000:80:04.5 (8086 
0e25): Already using the vfio-pci driver 00:03:05.430 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:05.430 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:05.430 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:05.430 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:05.430 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:05.430 15:26:44 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:05.430 15:26:44 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:05.430 15:26:44 -- setup/hugepages.sh@89 -- # local node 00:03:05.430 15:26:44 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:05.430 15:26:44 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:05.430 15:26:44 -- setup/hugepages.sh@92 -- # local surp 00:03:05.430 15:26:44 -- setup/hugepages.sh@93 -- # local resv 00:03:05.430 15:26:44 -- setup/hugepages.sh@94 -- # local anon 00:03:05.430 15:26:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:05.430 15:26:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:05.430 15:26:44 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:05.430 15:26:44 -- setup/common.sh@18 -- # local node= 00:03:05.430 15:26:44 -- setup/common.sh@19 -- # local var val 00:03:05.430 15:26:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.430 15:26:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.430 15:26:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.430 15:26:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.430 15:26:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.430 15:26:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44681716 kB' 'MemAvailable: 48235756 kB' 'Buffers: 2704 kB' 'Cached: 10225052 kB' 'SwapCached: 0 kB' 'Active: 7340172 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909288 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636136 kB' 'Mapped: 178504 kB' 'Shmem: 6276396 kB' 'KReclaimable: 192608 kB' 'Slab: 572832 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380224 kB' 'KernelStack: 12832 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8034944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196612 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ MemFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.430 15:26:44 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.430 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 
15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- 
setup/common.sh@32 -- # continue 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:05.431 15:26:44 -- setup/common.sh@33 -- # echo 0 00:03:05.431 15:26:44 -- setup/common.sh@33 -- # return 0 00:03:05.431 15:26:44 -- setup/hugepages.sh@97 -- # anon=0 00:03:05.431 15:26:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:05.431 15:26:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:05.431 15:26:44 -- setup/common.sh@18 -- # local node= 00:03:05.431 15:26:44 -- setup/common.sh@19 -- # local var val 00:03:05.431 15:26:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.431 15:26:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.431 15:26:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.431 15:26:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.431 15:26:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.431 15:26:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.431 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.431 15:26:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44682576 kB' 'MemAvailable: 48236616 kB' 'Buffers: 2704 kB' 'Cached: 10225056 kB' 'SwapCached: 0 kB' 'Active: 7340240 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909356 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636200 kB' 'Mapped: 178504 kB' 'Shmem: 6276400 kB' 'KReclaimable: 192608 kB' 'Slab: 572828 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380220 kB' 'KernelStack: 12784 kB' 'PageTables: 7812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8034956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.431 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 
15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 
15:26:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 
00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.432 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.432 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:05.433 15:26:44 -- setup/common.sh@33 -- # echo 0 00:03:05.433 15:26:44 -- setup/common.sh@33 -- # return 0 00:03:05.433 15:26:44 -- setup/hugepages.sh@99 -- # surp=0 00:03:05.433 15:26:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:05.433 15:26:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:05.433 15:26:44 -- setup/common.sh@18 -- # local node= 00:03:05.433 15:26:44 -- setup/common.sh@19 -- # local var val 00:03:05.433 15:26:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.433 15:26:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.433 15:26:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.433 15:26:44 -- 
setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.433 15:26:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.433 15:26:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44682576 kB' 'MemAvailable: 48236616 kB' 'Buffers: 2704 kB' 'Cached: 10225068 kB' 'SwapCached: 0 kB' 'Active: 7339600 kB' 'Inactive: 3520476 kB' 'Active(anon): 6908716 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635620 kB' 'Mapped: 178504 kB' 'Shmem: 6276412 kB' 'KReclaimable: 192608 kB' 'Slab: 572888 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380280 kB' 'KernelStack: 12864 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8034972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.433 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.433 15:26:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # 
continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ 
WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.434 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.434 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:05.435 15:26:44 -- setup/common.sh@33 -- # echo 0 00:03:05.435 15:26:44 -- setup/common.sh@33 -- # return 0 00:03:05.435 15:26:44 -- setup/hugepages.sh@100 -- # resv=0 00:03:05.435 15:26:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:05.435 nr_hugepages=1536 00:03:05.435 15:26:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:05.435 resv_hugepages=0 00:03:05.435 15:26:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:05.435 surplus_hugepages=0 00:03:05.435 15:26:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:05.435 anon_hugepages=0 00:03:05.435 15:26:44 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:05.435 15:26:44 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:05.435 15:26:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:05.435 15:26:44 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:05.435 15:26:44 -- setup/common.sh@18 -- # local node= 00:03:05.435 15:26:44 -- setup/common.sh@19 -- # local var val 00:03:05.435 15:26:44 -- setup/common.sh@20 -- # local mem_f mem 00:03:05.435 15:26:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:05.435 15:26:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:05.435 15:26:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:05.435 15:26:44 -- setup/common.sh@28 -- # mapfile -t mem 00:03:05.435 15:26:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 44682576 kB' 'MemAvailable: 48236616 kB' 'Buffers: 2704 kB' 'Cached: 10225080 kB' 'SwapCached: 0 kB' 'Active: 7339652 kB' 'Inactive: 3520476 kB' 'Active(anon): 6908768 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635608 kB' 'Mapped: 178504 kB' 'Shmem: 6276424 kB' 'KReclaimable: 192608 kB' 'Slab: 572888 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380280 kB' 'KernelStack: 12864 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086608 kB' 'Committed_AS: 8034984 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196580 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.435 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.435 15:26:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.436 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.436 15:26:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.436 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.436 15:26:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.436 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.436 15:26:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:05.436 15:26:44 -- setup/common.sh@32 -- # continue 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # IFS=': ' 00:03:05.436 15:26:44 -- setup/common.sh@31 -- # read -r var val _ 00:03:05.436 15:26:44 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.436 15:26:44 -- setup/common.sh@32 -- # continue
00:03:05.436 [xtrace condensed: setup/common.sh@31-32 repeats the IFS=': ' / read -r var val _ / continue pattern for every remaining meminfo key while looking for HugePages_Total]
00:03:05.437 15:26:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.437 15:26:44 -- setup/common.sh@32 -- # continue
00:03:05.437 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:05.437 15:26:44 -- setup/common.sh@33 -- # echo 1536
00:03:05.437 15:26:44 -- setup/common.sh@33 -- # return 0
00:03:05.437 15:26:44 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:03:05.437 15:26:44 -- setup/hugepages.sh@112 -- # get_nodes
00:03:05.437 15:26:44 -- setup/hugepages.sh@27 -- # local node
00:03:05.437 15:26:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.437 15:26:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:05.437 15:26:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:05.437 15:26:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:05.437 15:26:44 -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:05.437 15:26:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:05.437 15:26:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.437 15:26:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.437 15:26:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:05.437 [xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, maps the file into mem[] and begins the key-by-key walk]
00:03:05.437 15:26:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 21750172 kB' 'MemUsed: 11079712 kB' 'SwapCached: 0 kB' 'Active: 5559052 kB' 'Inactive: 3242324 kB' 'Active(anon): 5445600 kB' 'Inactive(anon): 0 kB' 'Active(file): 113452 kB' 'Inactive(file): 3242324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8338476 kB' 'Mapped: 41968 kB' 'AnonPages: 466120 kB' 'Shmem: 4982700 kB' 'KernelStack: 7208 kB' 'PageTables: 4812 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98788 kB' 'Slab: 336512 kB' 'SReclaimable: 98788 kB' 'SUnreclaim: 237724 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
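The field-by-field walks condensed above are just a key lookup over a meminfo-style file. A minimal, hypothetical sketch of that pattern (not the actual setup/common.sh code; the function name and structure here are illustrative only):

  # Hypothetical, simplified re-creation of the get_meminfo-style lookup traced above:
  # read "Key: value" lines from a meminfo file and print the value of the requested key.
  get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo line var val rest
    # Prefer the per-node file when a node is given, as the trace above does.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
      mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while read -r line; do
      line=${line#"Node $node "}            # per-node files prefix every line with "Node N"
      IFS=': ' read -r var val rest <<< "$line"
      if [[ $var == "$get" ]]; then
        echo "$val"
        return 0
      fi
    done < "$mem_f"
    return 1
  }
  # e.g. on a box laid out like this run: get_meminfo_sketch HugePages_Total 0  ->  512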
00:03:05.438 [xtrace condensed: the walk continues past every non-matching node0 key until the target is reached]
00:03:05.438 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.438 15:26:44 -- setup/common.sh@33 -- # echo 0
00:03:05.438 15:26:44 -- setup/common.sh@33 -- # return 0
00:03:05.438 15:26:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:05.438 15:26:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:05.438 15:26:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:05.438 15:26:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:05.438 [xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Surp, node=1, mem_f=/sys/devices/system/node/node1/meminfo and maps the file into mem[]]
00:03:05.439 15:26:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711852 kB' 'MemFree: 22932808 kB' 'MemUsed: 4779044 kB' 'SwapCached: 0 kB' 'Active: 1780608 kB' 'Inactive: 278152 kB' 'Active(anon): 1463176 kB' 'Inactive(anon): 0 kB' 'Active(file): 317432 kB' 'Inactive(file): 278152 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1889324 kB' 'Mapped: 136536 kB' 'AnonPages: 169504 kB' 'Shmem: 1293740 kB' 'KernelStack: 5656 kB' 'PageTables: 3248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 93820 kB' 'Slab: 236376 kB' 'SReclaimable: 93820 kB' 'SUnreclaim: 142556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:03:05.439 [xtrace condensed: setup/common.sh@31-32 walks the node1 keys one by one, continuing past every key that is not HugePages_Surp]
00:03:05.440 15:26:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:05.440 15:26:44 -- setup/common.sh@33 -- # echo 0
00:03:05.440 15:26:44 -- setup/common.sh@33 -- # return 0
00:03:05.440 15:26:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:05.440 15:26:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:05.440 15:26:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:05.440 15:26:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:05.440 15:26:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:03:05.440 node0=512 expecting 512
00:03:05.440 15:26:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:05.440 15:26:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:05.440 15:26:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:05.440 15:26:44 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:03:05.440 node1=1024 expecting 1024
00:03:05.440 15:26:44 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:03:05.440
00:03:05.440 real	0m1.491s
00:03:05.440 user	0m0.643s
00:03:05.440 sys	0m0.816s
00:03:05.440 15:26:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:05.440 15:26:44 -- common/autotest_common.sh@10 -- # set +x
00:03:05.440 ************************************
00:03:05.440 END TEST custom_alloc
00:03:05.440 ************************************
00:03:05.440 15:26:44 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:03:05.440 15:26:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:03:05.440 15:26:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:03:05.440 15:26:44 -- common/autotest_common.sh@10 -- # set +x
00:03:05.440 ************************************
00:03:05.440 START TEST no_shrink_alloc
00:03:05.440 ************************************
00:03:05.440 15:26:44 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:03:05.440 15:26:44 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:03:05.440 15:26:44 -- setup/hugepages.sh@49 -- # local size=2097152
00:03:05.440 15:26:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:05.440 15:26:44 -- setup/hugepages.sh@51 -- # shift
00:03:05.440 15:26:44 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:05.440 15:26:44 -- setup/hugepages.sh@52 -- # local node_ids
00:03:05.440 15:26:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:05.440 15:26:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:05.440 15:26:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
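The node0=512 / node1=1024 check that closes custom_alloc above amounts to reading HugePages_Total from each node's meminfo and comparing it with the expected split. A hedged sketch of that comparison (the expected values are taken from this run and are an assumption, not a general rule):

  # Sketch: re-check the per-node hugepage split that custom_alloc just verified.
  expected=([0]=512 [1]=1024)                 # assumed expectation, copied from this run
  for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    got=$(awk '/HugePages_Total/ {print $NF}' "$node_dir/meminfo")
    echo "node${node}=${got} expecting ${expected[$node]:-?}"
  done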
setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:05.440 15:26:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:03:05.440 15:26:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:05.440 15:26:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:05.440 15:26:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:05.440 15:26:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:05.440 15:26:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:05.440 15:26:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:05.440 15:26:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:05.440 15:26:44 -- setup/hugepages.sh@73 -- # return 0
00:03:05.440 15:26:44 -- setup/hugepages.sh@198 -- # setup output
00:03:05.440 15:26:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:05.440 15:26:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:06.817 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:06.817 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:03:06.817 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:06.817 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:06.817 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:06.817 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:06.817 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:06.817 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:06.817 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:06.817 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:03:06.817 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:03:06.817 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:03:06.817 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:03:06.817 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:03:06.817 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:03:06.817 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:03:06.817 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:03:06.817 15:26:45 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:03:06.817 15:26:45 -- setup/hugepages.sh@89 -- # local node
00:03:06.817 15:26:45 -- setup/hugepages.sh@90 -- # local sorted_t
00:03:06.817 15:26:45 -- setup/hugepages.sh@91 -- # local sorted_s
00:03:06.817 15:26:45 -- setup/hugepages.sh@92 -- # local surp
00:03:06.817 15:26:45 -- setup/hugepages.sh@93 -- # local resv
00:03:06.817 15:26:45 -- setup/hugepages.sh@94 -- # local anon
00:03:06.817 15:26:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:06.817 15:26:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:06.817 [xtrace condensed: setup/common.sh@17-31 sets get=AnonHugePages with no node restriction, keeps mem_f=/proc/meminfo and maps it into mem[]]
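The nr_hugepages=1024 requested above for node 0 is simply the requested size divided by the system hugepage size (2097152 kB / 2048 kB). A hedged sketch of that arithmetic and of the usual per-node sysfs knob (shown commented out; it needs root, and it assumes a hugepages-2048kB directory exists on this system):

  # Sketch: how the 1024 figure falls out, and how such a request could be made per node.
  size_kb=2097152                                             # requested size from the trace
  hp_kb=$(awk '/Hugepagesize/ {print $2}' /proc/meminfo)      # 2048 kB on this system
  nr=$(( size_kb / hp_kb ))                                   # 2097152 / 2048 = 1024 pages
  echo "would request $nr hugepages on node0"
  # Per-node sysfs knob (root required; path assumed for a 2048 kB hugepage size):
  # echo "$nr" | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-${hp_kb}kB/nr_hugepages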
00:03:06.818 15:26:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45719028 kB' 'MemAvailable: 49273068 kB' 'Buffers: 2704 kB' 'Cached: 10225140 kB' 'SwapCached: 0 kB' 'Active: 7340028 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909144 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 635808 kB' 'Mapped: 178508 kB' 'Shmem: 6276484 kB' 'KReclaimable: 192608 kB' 'Slab: 573080 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380472 kB' 'KernelStack: 12832 kB' 'PageTables: 7960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8035032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB'
00:03:06.818 [xtrace condensed: setup/common.sh@31-32 begins walking the /proc/meminfo keys looking for AnonHugePages]
00:03:06.818 [xtrace condensed: the AnonHugePages walk continues across the remaining /proc/meminfo keys, skipping each non-matching one]
00:03:06.818 15:26:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:06.818 15:26:45 -- setup/common.sh@33 -- # echo 0
00:03:06.818 15:26:45 -- setup/common.sh@33 -- # return 0
00:03:06.818 15:26:45 -- setup/hugepages.sh@97 -- # anon=0
00:03:06.818 15:26:46 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:06.819 [xtrace condensed: setup/common.sh@17-31 sets get=HugePages_Surp with no node restriction, keeps mem_f=/proc/meminfo and maps it into mem[]]
00:03:06.819 15:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45723680 kB' 'MemAvailable: 49277720 kB' 'Buffers: 2704 kB' 'Cached: 10225140 kB' 'SwapCached: 0 kB' 'Active: 7340456 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909572 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636268 kB' 'Mapped: 178508 kB' 'Shmem: 6276484 kB' 'KReclaimable: 192608 kB' 'Slab: 573096 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380488 kB' 'KernelStack: 12832 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8035044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196676 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB'
00:03:06.819 [xtrace condensed: setup/common.sh@31-32 begins walking the keys looking for HugePages_Surp]
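The anon=0 result and the HugePages_Surp lookup above feed the same accounting rule the custom_alloc test closed with: HugePages_Total should equal the requested pages plus surplus plus reserved. A hedged sketch of that global check, in the spirit of verify_nr_hugepages rather than a copy of it (nr_hugepages=1024 is the value from this run; the field names are standard /proc/meminfo keys):

  # Sketch: global hugepage accounting check, modeled on the trace above.
  nr_hugepages=1024    # assumption: the count requested by this run
  total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
  surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
  resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
  if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting OK: $total == $nr_hugepages + $surp + $resv"
  else
    echo "hugepage accounting mismatch: $total != $nr_hugepages + $surp + $resv" >&2
  fi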
00:03:06.819 [xtrace condensed: setup/common.sh@31-32 repeats the key-by-key walk over the remaining /proc/meminfo fields while looking for HugePages_Surp]
00:03:06.820 15:26:46 -- setup/common.sh@31 -- #
read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.820 15:26:46 -- setup/common.sh@33 -- # echo 0 00:03:06.820 15:26:46 -- setup/common.sh@33 -- # return 0 00:03:06.820 15:26:46 -- setup/hugepages.sh@99 -- # surp=0 00:03:06.820 15:26:46 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:06.820 15:26:46 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:06.820 15:26:46 -- setup/common.sh@18 -- # local node= 00:03:06.820 15:26:46 -- setup/common.sh@19 -- # local var val 00:03:06.820 15:26:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.820 15:26:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.820 15:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.820 15:26:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.820 15:26:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.820 15:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45724144 kB' 'MemAvailable: 49278184 kB' 'Buffers: 2704 kB' 'Cached: 10225152 kB' 'SwapCached: 0 kB' 'Active: 7340240 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909356 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636028 kB' 'Mapped: 178508 kB' 'Shmem: 6276496 kB' 'KReclaimable: 192608 kB' 'Slab: 573176 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380568 kB' 'KernelStack: 12896 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8035060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 
-- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 
-- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.820 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.820 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:06.821 15:26:46 -- setup/common.sh@33 -- # echo 0 00:03:06.821 15:26:46 -- setup/common.sh@33 -- # return 0 00:03:06.821 15:26:46 -- setup/hugepages.sh@100 -- # resv=0 00:03:06.821 15:26:46 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:06.821 nr_hugepages=1024 
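At this point the helper has established surp=0 and resv=0 and echoed nr_hugepages=1024; the lines that follow compare the kernel-wide HugePages_Total against that sum ((( 1024 == nr_hugepages + surp + resv )) in the trace). A short sketch of that consistency check, with the three inputs hard-coded to the values seen in this run:

  nr_hugepages=1024   # pages requested for the test (echoed above)
  surp=0              # HugePages_Surp from /proc/meminfo
  resv=0              # HugePages_Rsvd from /proc/meminfo
  total=1024          # HugePages_Total, fetched by the same meminfo scan

  # The pool is consistent only when the kernel-wide total accounts for the
  # requested pages plus any surplus and reserved pages.
  if (( total == nr_hugepages + surp + resv )); then
      echo "hugepage pool consistent"
  else
      echo "hugepage accounting mismatch" >&2
  fi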
00:03:06.821 15:26:46 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:06.821 resv_hugepages=0 00:03:06.821 15:26:46 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:06.821 surplus_hugepages=0 00:03:06.821 15:26:46 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:06.821 anon_hugepages=0 00:03:06.821 15:26:46 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.821 15:26:46 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:06.821 15:26:46 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:06.821 15:26:46 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:06.821 15:26:46 -- setup/common.sh@18 -- # local node= 00:03:06.821 15:26:46 -- setup/common.sh@19 -- # local var val 00:03:06.821 15:26:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.821 15:26:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.821 15:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:06.821 15:26:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:06.821 15:26:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.821 15:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45724144 kB' 'MemAvailable: 49278184 kB' 'Buffers: 2704 kB' 'Cached: 10225164 kB' 'SwapCached: 0 kB' 'Active: 7340248 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909364 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636024 kB' 'Mapped: 178508 kB' 'Shmem: 6276508 kB' 'KReclaimable: 192608 kB' 'Slab: 573176 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380568 kB' 'KernelStack: 12896 kB' 'PageTables: 8100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8035076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196660 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.821 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.821 15:26:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.822 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.822 15:26:46 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:06.823 15:26:46 -- setup/common.sh@33 -- # echo 1024 00:03:06.823 15:26:46 -- setup/common.sh@33 -- # return 0 00:03:06.823 15:26:46 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:06.823 15:26:46 -- setup/hugepages.sh@112 -- # get_nodes 00:03:06.823 15:26:46 -- setup/hugepages.sh@27 -- # local node 00:03:06.823 15:26:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.823 15:26:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:06.823 15:26:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:06.823 15:26:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:06.823 15:26:46 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:06.823 15:26:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:06.823 15:26:46 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:06.823 15:26:46 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:06.823 15:26:46 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:06.823 15:26:46 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:06.823 15:26:46 -- setup/common.sh@18 -- # local node=0 00:03:06.823 15:26:46 -- setup/common.sh@19 -- # local var val 00:03:06.823 15:26:46 -- setup/common.sh@20 -- # local mem_f mem 00:03:06.823 15:26:46 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:06.823 15:26:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:06.823 15:26:46 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:06.823 15:26:46 -- setup/common.sh@28 -- # mapfile -t mem 00:03:06.823 15:26:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20701912 kB' 'MemUsed: 12127972 kB' 'SwapCached: 0 kB' 'Active: 5559620 kB' 'Inactive: 3242324 kB' 'Active(anon): 5446168 kB' 'Inactive(anon): 0 kB' 'Active(file): 113452 kB' 'Inactive(file): 3242324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8338556 kB' 'Mapped: 41972 kB' 'AnonPages: 466560 kB' 'Shmem: 4982780 kB' 'KernelStack: 7224 kB' 'PageTables: 4852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98788 kB' 'Slab: 336680 kB' 'SReclaimable: 98788 kB' 'SUnreclaim: 237892 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 
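Here get_meminfo is re-entered with node=0, so instead of /proc/meminfo it reads /sys/devices/system/node/node0/meminfo and strips the leading "Node 0 " prefix from each line (the mem=("${mem[@]#Node +([0-9]) }") step in the trace) before scanning for HugePages_Surp. A simplified sketch of that per-node source selection:

  node=0
  mem_f=/proc/meminfo
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
      # per-node counters live in sysfs; each line starts with "Node <n> "
      mem_f=/sys/devices/system/node/node$node/meminfo
  fi
  # strip the node prefix so the same key/value scan works for both sources
  sed 's/^Node [0-9]* //' "$mem_f" | grep '^HugePages'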
00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- 
setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.823 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.823 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 
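The surrounding hugepages.sh logic (get_nodes earlier in the trace) enumerates /sys/devices/system/node/node*, records how many huge pages each node actually exposes, folds the reserved and surplus counts just read into the expected figure, and then prints the comparison that appears a little further down as "node0=1024 expecting 1024". A rough, heavily simplified reconstruction of that per-node bookkeeping (array names follow the trace; the seeding of the expected counts is hypothetical):

  declare -A nodes_sys    # huge pages the kernel reports per NUMA node
  declare -A nodes_test   # huge pages the test expects per node (hypothetical seed: all on node0)
  nodes_test[0]=1024

  for node in /sys/devices/system/node/node[0-9]*; do
      n=${node##*node}
      nodes_sys[$n]=$(awk '/HugePages_Total/ {print $NF}' "$node/meminfo")
      : "${nodes_test[$n]:=0}"
  done

  for n in "${!nodes_test[@]}"; do
      # reserved and surplus pages read via the per-node meminfo scan would be
      # added to nodes_test[n] here before the comparison is printed
      echo "node$n=${nodes_sys[$n]} expecting ${nodes_test[$n]}"
  done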
00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # continue 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # IFS=': ' 00:03:06.824 15:26:46 -- setup/common.sh@31 -- # read -r var val _ 00:03:06.824 15:26:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:06.824 15:26:46 -- setup/common.sh@33 -- # echo 0 00:03:06.824 15:26:46 -- setup/common.sh@33 -- # return 0 00:03:06.824 15:26:46 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:06.824 15:26:46 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:06.824 15:26:46 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:06.824 15:26:46 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:06.824 15:26:46 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:06.824 node0=1024 expecting 1024 00:03:06.824 15:26:46 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:06.824 15:26:46 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:06.824 15:26:46 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:06.824 15:26:46 -- setup/hugepages.sh@202 -- # setup output 00:03:06.824 15:26:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:06.824 15:26:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.201 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:08.201 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:08.201 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:08.201 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:08.201 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:08.201 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:08.201 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:08.201 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:08.201 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:08.201 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:08.201 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:08.201 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:08.201 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:08.201 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:08.201 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:08.201 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:08.201 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:08.201 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:08.201 15:26:47 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:08.202 15:26:47 -- setup/hugepages.sh@89 -- # local node 00:03:08.202 15:26:47 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:08.202 15:26:47 -- setup/hugepages.sh@91 -- 
# local sorted_s 00:03:08.202 15:26:47 -- setup/hugepages.sh@92 -- # local surp 00:03:08.202 15:26:47 -- setup/hugepages.sh@93 -- # local resv 00:03:08.202 15:26:47 -- setup/hugepages.sh@94 -- # local anon 00:03:08.202 15:26:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:08.202 15:26:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:08.202 15:26:47 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:08.202 15:26:47 -- setup/common.sh@18 -- # local node= 00:03:08.202 15:26:47 -- setup/common.sh@19 -- # local var val 00:03:08.202 15:26:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.202 15:26:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.202 15:26:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.202 15:26:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.202 15:26:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.202 15:26:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45743648 kB' 'MemAvailable: 49297688 kB' 'Buffers: 2704 kB' 'Cached: 10225220 kB' 'SwapCached: 0 kB' 'Active: 7340388 kB' 'Inactive: 3520476 kB' 'Active(anon): 6909504 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 636128 kB' 'Mapped: 178580 kB' 'Shmem: 6276564 kB' 'KReclaimable: 192608 kB' 'Slab: 573016 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380408 kB' 'KernelStack: 12912 kB' 'PageTables: 8084 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8036140 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196724 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 
15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 
15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.202 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.202 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:08.203 15:26:47 -- setup/common.sh@33 -- # echo 0 00:03:08.203 15:26:47 -- setup/common.sh@33 -- # return 0 00:03:08.203 15:26:47 -- setup/hugepages.sh@97 -- # anon=0 00:03:08.203 15:26:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:08.203 15:26:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.203 15:26:47 -- setup/common.sh@18 -- # local node= 00:03:08.203 15:26:47 -- setup/common.sh@19 -- # local var val 00:03:08.203 15:26:47 -- 
setup/common.sh@20 -- # local mem_f mem 00:03:08.203 15:26:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.203 15:26:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.203 15:26:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.203 15:26:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.203 15:26:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45753752 kB' 'MemAvailable: 49307792 kB' 'Buffers: 2704 kB' 'Cached: 10225224 kB' 'SwapCached: 0 kB' 'Active: 7342648 kB' 'Inactive: 3520476 kB' 'Active(anon): 6911764 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 638356 kB' 'Mapped: 178956 kB' 'Shmem: 6276568 kB' 'KReclaimable: 192608 kB' 'Slab: 573016 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380408 kB' 'KernelStack: 12848 kB' 'PageTables: 7904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8038064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196644 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 
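The trace above is setup/common.sh's get_meminfo helper: it dumps /proc/meminfo (or a per-node meminfo file), strips any "Node <n>" prefix, and walks the fields one by one until the requested key matches, echoing that field's value. A minimal sketch of that lookup, assuming a simplified regex match instead of the script's exact read loop; get_meminfo_sketch is a hypothetical name and only handles plain field names like the ones queried in this run:

    # Hypothetical, simplified reconstruction of the lookup traced above;
    # the real helper lives in setup/common.sh and differs in detail.
    get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      # When a node is given, read that node's own meminfo file instead
      # (its lines carry a "Node <n> " prefix, tolerated by the regex below).
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local line re="^(Node [0-9]+ +)?${get}: +([0-9]+)"
      while IFS= read -r line; do
        if [[ $line =~ $re ]]; then
          echo "${BASH_REMATCH[2]}"   # value in kB, or a bare count for HugePages_* fields
          return 0
        fi
      done < "$mem_f"
      echo 0
    }

    anon=$(get_meminfo_sketch AnonHugePages)   # 0 in the run above, hence anon=0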
00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.203 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.203 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 
00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.204 15:26:47 -- setup/common.sh@33 -- # echo 0 00:03:08.204 15:26:47 -- setup/common.sh@33 -- # return 0 00:03:08.204 15:26:47 -- setup/hugepages.sh@99 -- # surp=0 00:03:08.204 15:26:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:08.204 15:26:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:08.204 15:26:47 -- setup/common.sh@18 -- # local node= 00:03:08.204 15:26:47 -- setup/common.sh@19 -- # local var val 00:03:08.204 15:26:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.204 15:26:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.204 15:26:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.204 15:26:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.204 15:26:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.204 15:26:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45746220 kB' 'MemAvailable: 49300260 kB' 'Buffers: 2704 kB' 'Cached: 10225224 kB' 'SwapCached: 0 kB' 'Active: 7346132 kB' 'Inactive: 3520476 kB' 'Active(anon): 6915248 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641876 kB' 'Mapped: 178952 kB' 'Shmem: 6276568 kB' 'KReclaimable: 192608 kB' 'Slab: 573048 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380440 kB' 'KernelStack: 12976 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8041024 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196648 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.204 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.204 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 
00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.205 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:08.205 15:26:47 -- setup/common.sh@33 -- # echo 0 00:03:08.205 15:26:47 -- setup/common.sh@33 -- # return 0 00:03:08.205 15:26:47 -- setup/hugepages.sh@100 -- # resv=0 00:03:08.205 15:26:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:08.205 nr_hugepages=1024 00:03:08.205 15:26:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:08.205 resv_hugepages=0 00:03:08.205 15:26:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:08.205 surplus_hugepages=0 00:03:08.205 15:26:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:08.205 anon_hugepages=0 00:03:08.205 15:26:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:08.205 15:26:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:08.205 15:26:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:08.205 15:26:47 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:08.205 15:26:47 -- setup/common.sh@18 -- # local node= 00:03:08.205 15:26:47 -- setup/common.sh@19 -- # local var val 00:03:08.205 15:26:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.205 15:26:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.205 15:26:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:08.205 15:26:47 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:08.205 15:26:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.205 15:26:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.205 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541736 kB' 'MemFree: 45749068 kB' 'MemAvailable: 49303108 kB' 'Buffers: 2704 kB' 'Cached: 10225252 kB' 'SwapCached: 0 kB' 'Active: 7345508 kB' 'Inactive: 3520476 kB' 'Active(anon): 6914624 kB' 'Inactive(anon): 0 kB' 'Active(file): 430884 kB' 'Inactive(file): 3520476 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 641288 kB' 'Mapped: 179432 kB' 'Shmem: 6276596 kB' 'KReclaimable: 192608 kB' 'Slab: 573064 kB' 'SReclaimable: 192608 kB' 'SUnreclaim: 380456 kB' 'KernelStack: 12832 kB' 'PageTables: 7808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610896 kB' 'Committed_AS: 8041044 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196600 kB' 'VmallocChunk: 0 kB' 'Percpu: 36096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 
'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2068060 kB' 'DirectMap2M: 24066048 kB' 'DirectMap1G: 42991616 kB' 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 
15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 
15:26:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.206 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.206 15:26:47 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 
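The HugePages_Total lookup in progress here feeds the consistency check at setup/hugepages.sh@107/@110: the total reported by the kernel must equal the requested nr_hugepages plus the surplus and reserved counts collected above, all of which are zero in this run. A restated sketch of that accounting, reusing the hypothetical get_meminfo_sketch helper from the earlier sketch and the values seen in this log:

    nr_hugepages=1024                              # requested by the test
    anon=$(get_meminfo_sketch AnonHugePages)       # 0 kB of anonymous THP
    surp=$(get_meminfo_sketch HugePages_Surp)      # 0 surplus pages
    resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 reserved pages
    total=$(get_meminfo_sketch HugePages_Total)    # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2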
00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:08.207 15:26:47 -- setup/common.sh@33 -- # echo 1024 00:03:08.207 15:26:47 -- setup/common.sh@33 -- # return 0 00:03:08.207 15:26:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages 
+ surp + resv )) 00:03:08.207 15:26:47 -- setup/hugepages.sh@112 -- # get_nodes 00:03:08.207 15:26:47 -- setup/hugepages.sh@27 -- # local node 00:03:08.207 15:26:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.207 15:26:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:08.207 15:26:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:08.207 15:26:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:08.207 15:26:47 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:08.207 15:26:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:08.207 15:26:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:08.207 15:26:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:08.207 15:26:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:08.207 15:26:47 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:08.207 15:26:47 -- setup/common.sh@18 -- # local node=0 00:03:08.207 15:26:47 -- setup/common.sh@19 -- # local var val 00:03:08.207 15:26:47 -- setup/common.sh@20 -- # local mem_f mem 00:03:08.207 15:26:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:08.207 15:26:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:08.207 15:26:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:08.207 15:26:47 -- setup/common.sh@28 -- # mapfile -t mem 00:03:08.207 15:26:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 20727344 kB' 'MemUsed: 12102540 kB' 'SwapCached: 0 kB' 'Active: 5558724 kB' 'Inactive: 3242324 kB' 'Active(anon): 5445272 kB' 'Inactive(anon): 0 kB' 'Active(file): 113452 kB' 'Inactive(file): 3242324 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8338636 kB' 'Mapped: 41980 kB' 'AnonPages: 465560 kB' 'Shmem: 4982860 kB' 'KernelStack: 7144 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 98788 kB' 'Slab: 336504 kB' 'SReclaimable: 98788 kB' 'SUnreclaim: 237716 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 
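After the global totals check out, get_nodes enumerates the NUMA node directories (two nodes here) and the per-node branch of get_meminfo reads /sys/devices/system/node/node0/meminfo, whose dump above shows all 1024 hugepages resident on node 0 with no surplus. A sketch of that per-node walk, again using the hypothetical get_meminfo_sketch helper; the extglob pattern mirrors the one visible in the trace:

    shopt -s extglob   # needed for the +([0-9]) glob used below
    for node_dir in /sys/devices/system/node/node+([0-9]); do
      n=${node_dir##*node}   # same expansion hugepages.sh@30 uses for the node index
      echo "node$n HugePages_Surp: $(get_meminfo_sketch HugePages_Surp "$n")"
    done
    # In this run: node 0 reports HugePages_Total/Free 1024 and Surp 0; node 1 holds none.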
00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.207 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.207 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # continue 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # IFS=': ' 00:03:08.208 15:26:47 -- setup/common.sh@31 -- # read -r var val _ 00:03:08.208 15:26:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:08.208 15:26:47 -- setup/common.sh@33 -- # echo 0 00:03:08.208 15:26:47 -- setup/common.sh@33 -- # return 0 00:03:08.208 15:26:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:08.208 15:26:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:08.208 15:26:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:08.208 15:26:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:08.208 15:26:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:08.208 node0=1024 expecting 1024 00:03:08.208 15:26:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:08.208 00:03:08.208 real 0m2.709s 00:03:08.208 user 0m1.089s 00:03:08.208 sys 0m1.545s 00:03:08.208 15:26:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.208 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:03:08.208 ************************************ 00:03:08.208 END TEST no_shrink_alloc 00:03:08.208 ************************************ 00:03:08.208 15:26:47 -- setup/hugepages.sh@217 -- # clear_hp 00:03:08.208 15:26:47 -- setup/hugepages.sh@37 -- # local node hp 00:03:08.208 15:26:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.208 15:26:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.208 15:26:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.208 
15:26:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.208 15:26:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.208 15:26:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:08.208 15:26:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.208 15:26:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.208 15:26:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:08.208 15:26:47 -- setup/hugepages.sh@41 -- # echo 0 00:03:08.208 15:26:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:08.208 15:26:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:08.208 00:03:08.208 real 0m11.165s 00:03:08.208 user 0m4.265s 00:03:08.208 sys 0m5.834s 00:03:08.208 15:26:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:08.208 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:03:08.208 ************************************ 00:03:08.208 END TEST hugepages 00:03:08.208 ************************************ 00:03:08.208 15:26:47 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:08.208 15:26:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:08.208 15:26:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:08.208 15:26:47 -- common/autotest_common.sh@10 -- # set +x 00:03:08.208 ************************************ 00:03:08.208 START TEST driver 00:03:08.208 ************************************ 00:03:08.208 15:26:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:08.208 * Looking for test storage... 
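The hugepages test above walks /proc/meminfo and the per-node /sys/devices/system/node/nodeN/meminfo files field by field with an IFS/read loop to pull out values such as HugePages_Total and HugePages_Surp. Below is a condensed sketch of that lookup written with sed/awk rather than the script's literal loop; the field names and paths come from the trace, but the implementation is illustrative only.
get_meminfo_sketch() {
    # get_meminfo_sketch <Field> [node] -> numeric value of Field, e.g. HugePages_Surp 0
    local want=$1 node=${2:-} src=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        src=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines carry a "Node N " prefix; strip it before matching,
    # then print the value of the first line whose field name matches exactly.
    sed 's/^Node [0-9]* //' "$src" | awk -v f="$want" -F': *' '$1 == f {print $2+0; exit}'
}
# Example from this run: get_meminfo_sketch HugePages_Surp 0 returns 0 on node0,
# and the test then checks "node0=1024 expecting 1024" hugepages.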
00:03:08.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:08.208 15:26:47 -- setup/driver.sh@68 -- # setup reset 00:03:08.208 15:26:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:08.208 15:26:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:10.736 15:26:49 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:10.736 15:26:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:10.736 15:26:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:10.736 15:26:49 -- common/autotest_common.sh@10 -- # set +x 00:03:10.736 ************************************ 00:03:10.736 START TEST guess_driver 00:03:10.736 ************************************ 00:03:10.736 15:26:49 -- common/autotest_common.sh@1104 -- # guess_driver 00:03:10.736 15:26:49 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:10.736 15:26:49 -- setup/driver.sh@47 -- # local fail=0 00:03:10.736 15:26:49 -- setup/driver.sh@49 -- # pick_driver 00:03:10.736 15:26:49 -- setup/driver.sh@36 -- # vfio 00:03:10.736 15:26:49 -- setup/driver.sh@21 -- # local iommu_grups 00:03:10.736 15:26:49 -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:10.736 15:26:49 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:10.736 15:26:49 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:10.736 15:26:49 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:10.736 15:26:49 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:03:10.736 15:26:49 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:10.736 15:26:49 -- setup/driver.sh@14 -- # mod vfio_pci 00:03:10.736 15:26:49 -- setup/driver.sh@12 -- # dep vfio_pci 00:03:10.736 15:26:49 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:10.736 15:26:49 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:10.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:10.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:10.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:10.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:10.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:10.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:10.736 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:10.736 15:26:49 -- setup/driver.sh@30 -- # return 0 00:03:10.736 15:26:49 -- setup/driver.sh@37 -- # echo vfio-pci 00:03:10.736 15:26:49 -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:10.736 15:26:49 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:10.736 15:26:49 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:10.736 Looking for driver=vfio-pci 00:03:10.736 15:26:49 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:10.736 15:26:49 -- setup/driver.sh@45 -- # setup output config 00:03:10.736 15:26:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:10.736 15:26:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:12.110 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.110 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:03:12.110 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.110 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.110 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.110 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.110 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.110 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:12.111 15:26:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:12.111 15:26:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:12.111 15:26:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.047 15:26:52 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:03:13.047 15:26:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:13.047 15:26:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:13.047 15:26:52 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:13.047 15:26:52 -- setup/driver.sh@65 -- # setup reset 00:03:13.047 15:26:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:13.047 15:26:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:15.621 00:03:15.621 real 0m4.901s 00:03:15.621 user 0m1.072s 00:03:15.621 sys 0m1.931s 00:03:15.621 15:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.621 15:26:54 -- common/autotest_common.sh@10 -- # set +x 00:03:15.621 ************************************ 00:03:15.621 END TEST guess_driver 00:03:15.621 ************************************ 00:03:15.621 00:03:15.621 real 0m7.376s 00:03:15.621 user 0m1.591s 00:03:15.621 sys 0m2.882s 00:03:15.621 15:26:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:15.621 15:26:54 -- common/autotest_common.sh@10 -- # set +x 00:03:15.621 ************************************ 00:03:15.621 END TEST driver 00:03:15.621 ************************************ 00:03:15.621 15:26:54 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:15.621 15:26:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:15.621 15:26:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:15.621 15:26:54 -- common/autotest_common.sh@10 -- # set +x 00:03:15.621 ************************************ 00:03:15.621 START TEST devices 00:03:15.621 ************************************ 00:03:15.621 15:26:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:15.621 * Looking for test storage... 
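The guess_driver test above settles on vfio-pci after confirming that /sys/kernel/iommu_groups is populated (141 groups on this host) and that modprobe can resolve the vfio_pci module chain. A minimal sketch of that decision follows; it is simplified from the trace rather than copied from setup/driver.sh, and the 'No valid driver found' fallback string is the one the script compares against.
pick_driver_sketch() {
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=N
    # Unsafe no-IOMMU mode would also allow vfio-pci even without IOMMU groups.
    if [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]; then
        unsafe=$(cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
    fi
    if { (( ${#groups[@]} > 0 )) || [[ $unsafe == [Yy] ]]; } \
       && modprobe --show-depends vfio_pci >/dev/null 2>&1; then
        echo vfio-pci
    else
        echo 'No valid driver found'
    fi
}
# On this host the sketch would print vfio-pci, matching the
# "Looking for driver=vfio-pci" line in the trace above.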
00:03:15.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:15.621 15:26:54 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:15.621 15:26:54 -- setup/devices.sh@192 -- # setup reset 00:03:15.621 15:26:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:15.621 15:26:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:17.519 15:26:56 -- setup/devices.sh@194 -- # get_zoned_devs 00:03:17.519 15:26:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:17.519 15:26:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:17.519 15:26:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:17.519 15:26:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:17.519 15:26:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:17.519 15:26:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:17.519 15:26:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:17.519 15:26:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:17.519 15:26:56 -- setup/devices.sh@196 -- # blocks=() 00:03:17.519 15:26:56 -- setup/devices.sh@196 -- # declare -a blocks 00:03:17.519 15:26:56 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:17.519 15:26:56 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:17.519 15:26:56 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:17.519 15:26:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:17.519 15:26:56 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:17.519 15:26:56 -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:17.519 15:26:56 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:17.519 15:26:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:17.519 15:26:56 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:17.519 15:26:56 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:03:17.519 15:26:56 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:17.519 No valid GPT data, bailing 00:03:17.519 15:26:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:17.519 15:26:56 -- scripts/common.sh@393 -- # pt= 00:03:17.519 15:26:56 -- scripts/common.sh@394 -- # return 1 00:03:17.519 15:26:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:17.519 15:26:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:17.520 15:26:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:17.520 15:26:56 -- setup/common.sh@80 -- # echo 1000204886016 00:03:17.520 15:26:56 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:17.520 15:26:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:17.520 15:26:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:17.520 15:26:56 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:17.520 15:26:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:17.520 15:26:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:17.520 15:26:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:17.520 15:26:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:17.520 15:26:56 -- common/autotest_common.sh@10 -- # set +x 00:03:17.520 ************************************ 00:03:17.520 START TEST nvme_mount 00:03:17.520 ************************************ 00:03:17.520 15:26:56 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:03:17.520 15:26:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:17.520 15:26:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:17.520 15:26:56 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:17.520 15:26:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:17.520 15:26:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:17.520 15:26:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:17.520 15:26:56 -- setup/common.sh@40 -- # local part_no=1 00:03:17.520 15:26:56 -- setup/common.sh@41 -- # local size=1073741824 00:03:17.520 15:26:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:17.520 15:26:56 -- setup/common.sh@44 -- # parts=() 00:03:17.520 15:26:56 -- setup/common.sh@44 -- # local parts 00:03:17.520 15:26:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:17.520 15:26:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.520 15:26:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:17.520 15:26:56 -- setup/common.sh@46 -- # (( part++ )) 00:03:17.520 15:26:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:17.520 15:26:56 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:17.520 15:26:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:17.520 15:26:56 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:18.456 Creating new GPT entries in memory. 00:03:18.456 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:18.456 other utilities. 00:03:18.456 15:26:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:18.456 15:26:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:18.456 15:26:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:18.456 15:26:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:18.456 15:26:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:19.394 Creating new GPT entries in memory. 00:03:19.394 The operation has completed successfully. 
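The nvme_mount test above zaps the GPT on the test disk and creates a single 1 GiB partition; the 2048..2099199 sector range in the sgdisk call follows from size=1073741824 bytes divided by 512-byte sectors. A standalone sketch of that step (device path taken from the trace; destructive, illustration only -- the real script additionally waits for the partition uevent via sync_dev_uevents.sh):
disk=/dev/nvme0n1
start=2048
size_sectors=$(( 1073741824 / 512 ))        # 2097152 sectors = 1 GiB
end=$(( start + size_sectors - 1 ))         # 2099199, matching the trace
sgdisk "$disk" --zap-all                    # destroy any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:"$start":"$end"   # partition 1: sectors 2048-2099199
partprobe "$disk"                           # re-read the table instead of waiting on udev
# The test then runs mkfs.ext4 -qF /dev/nvme0n1p1 and mounts it under test/setup/nvme_mount.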
00:03:19.394 15:26:58 -- setup/common.sh@57 -- # (( part++ )) 00:03:19.394 15:26:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:19.394 15:26:58 -- setup/common.sh@62 -- # wait 1980900 00:03:19.394 15:26:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.394 15:26:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:19.394 15:26:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.394 15:26:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:19.394 15:26:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:19.394 15:26:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.394 15:26:58 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.394 15:26:58 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:19.394 15:26:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:19.394 15:26:58 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:19.394 15:26:58 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:19.394 15:26:58 -- setup/devices.sh@53 -- # local found=0 00:03:19.394 15:26:58 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:19.394 15:26:58 -- setup/devices.sh@56 -- # : 00:03:19.394 15:26:58 -- setup/devices.sh@59 -- # local pci status 00:03:19.394 15:26:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:19.394 15:26:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:19.394 15:26:58 -- setup/devices.sh@47 -- # setup output config 00:03:19.394 15:26:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:19.394 15:26:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:20.328 15:26:59 -- setup/devices.sh@63 -- # found=1 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 
15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.328 15:26:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:20.328 15:26:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.586 15:26:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:20.586 15:26:59 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:20.586 15:26:59 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.586 15:26:59 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.586 15:26:59 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.586 15:26:59 -- setup/devices.sh@110 -- # cleanup_nvme 00:03:20.586 15:26:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.586 15:26:59 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.586 15:26:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:20.586 15:26:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:20.586 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:20.586 15:26:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:20.586 15:26:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:20.844 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:20.844 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:20.844 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:20.844 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:20.844 15:27:00 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:20.844 15:27:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:20.844 15:27:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.844 15:27:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:20.844 15:27:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:20.844 15:27:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.844 15:27:00 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.844 15:27:00 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:20.844 15:27:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:20.844 15:27:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:20.844 15:27:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:20.844 15:27:00 -- setup/devices.sh@53 -- # local found=0 00:03:20.844 15:27:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:20.844 15:27:00 -- setup/devices.sh@56 -- # : 00:03:20.844 15:27:00 -- setup/devices.sh@59 -- # local pci status 00:03:20.844 15:27:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:20.844 15:27:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:20.844 15:27:00 -- setup/devices.sh@47 -- # setup output config 00:03:20.844 15:27:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.844 15:27:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:22.217 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.217 15:27:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:22.217 15:27:01 -- setup/devices.sh@63 -- # found=1 00:03:22.217 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.217 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.217 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:22.218 15:27:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:22.218 15:27:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.218 15:27:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:22.218 15:27:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:22.218 15:27:01 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:22.218 15:27:01 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:22.218 15:27:01 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:22.218 15:27:01 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:22.218 15:27:01 -- setup/devices.sh@50 -- # local mount_point= 00:03:22.218 15:27:01 -- setup/devices.sh@51 -- # local test_file= 00:03:22.218 15:27:01 -- setup/devices.sh@53 -- # local found=0 00:03:22.218 15:27:01 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:22.218 15:27:01 -- setup/devices.sh@59 -- # local pci status 00:03:22.218 15:27:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:22.218 15:27:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:22.218 15:27:01 -- setup/devices.sh@47 -- # setup output config 00:03:22.218 15:27:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:22.218 15:27:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:23.155 15:27:02 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:23.155 15:27:02 -- setup/devices.sh@63 -- # found=1 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.155 15:27:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:23.155 15:27:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:23.456 15:27:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:23.456 15:27:02 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:23.456 15:27:02 -- setup/devices.sh@68 -- # return 0 00:03:23.456 15:27:02 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:23.456 15:27:02 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:23.456 15:27:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:03:23.456 15:27:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:23.456 15:27:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:23.456 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:23.456 00:03:23.456 real 0m6.191s 00:03:23.456 user 0m1.431s 00:03:23.456 sys 0m2.366s 00:03:23.456 15:27:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:23.456 15:27:02 -- common/autotest_common.sh@10 -- # set +x 00:03:23.456 ************************************ 00:03:23.456 END TEST nvme_mount 00:03:23.456 ************************************ 00:03:23.456 15:27:02 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:23.456 15:27:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:23.456 15:27:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:23.456 15:27:02 -- common/autotest_common.sh@10 -- # set +x 00:03:23.456 ************************************ 00:03:23.456 START TEST dm_mount 00:03:23.456 ************************************ 00:03:23.456 15:27:02 -- common/autotest_common.sh@1104 -- # dm_mount 00:03:23.456 15:27:02 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:23.456 15:27:02 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:23.456 15:27:02 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:23.456 15:27:02 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:23.456 15:27:02 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:23.456 15:27:02 -- setup/common.sh@40 -- # local part_no=2 00:03:23.457 15:27:02 -- setup/common.sh@41 -- # local size=1073741824 00:03:23.457 15:27:02 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:23.457 15:27:02 -- setup/common.sh@44 -- # parts=() 00:03:23.457 15:27:02 -- setup/common.sh@44 -- # local parts 00:03:23.457 15:27:02 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:23.457 15:27:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:23.457 15:27:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:23.457 15:27:02 -- setup/common.sh@46 -- # (( part++ )) 00:03:23.457 15:27:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:23.457 15:27:02 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:23.457 15:27:02 -- setup/common.sh@46 -- # (( part++ )) 00:03:23.457 15:27:02 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:23.457 15:27:02 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:23.457 15:27:02 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:23.457 15:27:02 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:24.419 Creating new GPT entries in memory. 00:03:24.419 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:24.419 other utilities. 00:03:24.419 15:27:03 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:24.419 15:27:03 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:24.419 15:27:03 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:24.419 15:27:03 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:24.419 15:27:03 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:25.352 Creating new GPT entries in memory. 00:03:25.352 The operation has completed successfully. 
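The dm_mount test starting above repeats the partitioning step twice (partitions 1 and 2, 1 GiB each) and then stitches them together with device-mapper. The trace shows the later dmsetup create nvme_dm_test call but not the table it is fed, so the linear table below is an illustrative equivalent under that assumption, not a copy of devices.sh.
p1=/dev/nvme0n1p1
p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1")   # partition sizes in 512-byte sectors
s2=$(blockdev --getsz "$p2")
# Concatenate the two partitions into one linear device-mapper target.
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
ls -l /dev/mapper/nvme_dm_test   # resolves to a /dev/dm-N node, dm-0 in this run
# The test then formats the mapped device with mkfs.ext4 and mounts it under
# test/setup/dm_mount, mirroring the nvme_mount flow.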
00:03:25.352 15:27:04 -- setup/common.sh@57 -- # (( part++ )) 00:03:25.352 15:27:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:25.352 15:27:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:25.352 15:27:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:25.352 15:27:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:26.726 The operation has completed successfully. 00:03:26.726 15:27:05 -- setup/common.sh@57 -- # (( part++ )) 00:03:26.726 15:27:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:26.726 15:27:05 -- setup/common.sh@62 -- # wait 1983474 00:03:26.726 15:27:05 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:26.726 15:27:05 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.726 15:27:05 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:26.726 15:27:05 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:26.726 15:27:05 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:26.726 15:27:05 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:26.726 15:27:05 -- setup/devices.sh@161 -- # break 00:03:26.726 15:27:05 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:26.726 15:27:05 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:26.726 15:27:05 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:26.726 15:27:05 -- setup/devices.sh@166 -- # dm=dm-0 00:03:26.726 15:27:05 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:26.726 15:27:05 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:26.726 15:27:05 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.726 15:27:05 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:26.726 15:27:05 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.726 15:27:05 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:26.726 15:27:05 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:26.726 15:27:05 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.726 15:27:05 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:26.726 15:27:05 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:26.726 15:27:05 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:26.726 15:27:05 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:26.726 15:27:05 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:26.726 15:27:05 -- setup/devices.sh@53 -- # local found=0 00:03:26.726 15:27:05 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:26.726 15:27:05 -- setup/devices.sh@56 -- # : 00:03:26.726 15:27:05 -- 
setup/devices.sh@59 -- # local pci status 00:03:26.726 15:27:05 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:26.726 15:27:05 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:26.726 15:27:05 -- setup/devices.sh@47 -- # setup output config 00:03:26.726 15:27:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.726 15:27:05 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:27.656 15:27:06 -- setup/devices.sh@63 -- # found=1 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.656 15:27:06 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:27.656 15:27:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.914 15:27:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:27.914 15:27:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:27.914 15:27:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.914 15:27:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:27.914 15:27:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:27.914 15:27:07 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:27.914 15:27:07 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:27.914 15:27:07 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:27.914 15:27:07 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:27.914 15:27:07 -- setup/devices.sh@50 -- # local mount_point= 00:03:27.914 15:27:07 -- setup/devices.sh@51 -- # local test_file= 00:03:27.914 15:27:07 -- setup/devices.sh@53 -- # local found=0 00:03:27.914 15:27:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:27.914 15:27:07 -- setup/devices.sh@59 -- # local pci status 00:03:27.914 15:27:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:27.914 15:27:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:27.914 15:27:07 -- setup/devices.sh@47 -- # setup output config 00:03:27.914 15:27:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:27.914 15:27:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:28.844 15:27:08 -- setup/devices.sh@63 -- # found=1 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 
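The PCI scan running through this part of the trace (and continuing below) is devices.sh's verify helper: it restricts setup.sh status to the NVMe under test via PCI_ALLOWED=0000:88:00.0 and checks that the device is reported as actively used, and therefore skipped for driver binding. A condensed sketch of that check, not the script's literal loop (the relative scripts/setup.sh invocation is an assumption):
target=0000:88:00.0   # the NVMe under test in this run
found=0
# setup.sh status prints one line per PCI device; the trailing fields explain why a
# device is skipped, e.g. "Active devices: holder@nvme0n1p1:dm-0,..., so not binding PCI dev".
while read -r pci _ _ status; do
    if [[ $pci == "$target" && $status == *"so not binding PCI dev"* ]]; then
        found=1
    fi
done < <(PCI_ALLOWED="$target" ./scripts/setup.sh status)
(( found == 1 )) || echo "expected $target to be reported as in use" >&2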
00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.844 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.844 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.845 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.845 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.845 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.845 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.845 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.845 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.845 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.845 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.845 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:28.845 15:27:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:28.845 15:27:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:29.103 15:27:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:29.103 15:27:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:29.103 15:27:08 -- setup/devices.sh@68 -- # return 0 00:03:29.103 15:27:08 -- setup/devices.sh@187 -- # cleanup_dm 00:03:29.103 15:27:08 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:29.103 15:27:08 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:29.103 15:27:08 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:29.103 15:27:08 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.103 15:27:08 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:29.103 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:29.103 15:27:08 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:29.103 15:27:08 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:29.103 00:03:29.103 real 0m5.682s 00:03:29.103 user 0m0.941s 00:03:29.103 sys 0m1.619s 00:03:29.103 15:27:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.103 15:27:08 -- common/autotest_common.sh@10 -- # set +x 00:03:29.103 ************************************ 00:03:29.103 END TEST dm_mount 00:03:29.103 ************************************ 00:03:29.103 15:27:08 -- setup/devices.sh@1 -- # cleanup 00:03:29.103 15:27:08 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:29.103 15:27:08 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:29.103 15:27:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.103 15:27:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:29.103 15:27:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:29.103 15:27:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:29.361 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:29.361 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:29.361 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:29.361 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:29.361 15:27:08 -- setup/devices.sh@12 -- # cleanup_dm 00:03:29.361 15:27:08 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:29.361 15:27:08 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:29.361 15:27:08 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:29.361 15:27:08 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:29.361 15:27:08 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:29.361 15:27:08 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:29.361 00:03:29.361 real 0m13.743s 00:03:29.361 user 0m2.968s 00:03:29.361 sys 0m5.028s 00:03:29.361 15:27:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.361 15:27:08 -- common/autotest_common.sh@10 -- # set +x 00:03:29.361 ************************************ 00:03:29.361 END TEST devices 00:03:29.361 ************************************ 00:03:29.361 00:03:29.361 real 0m42.729s 00:03:29.361 user 0m11.967s 00:03:29.361 sys 0m19.099s 00:03:29.361 15:27:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.361 15:27:08 -- common/autotest_common.sh@10 -- # set +x 00:03:29.361 ************************************ 00:03:29.361 END TEST setup.sh 00:03:29.361 ************************************ 00:03:29.361 15:27:08 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:30.291 Hugepages 00:03:30.291 node hugesize free / total 00:03:30.291 node0 1048576kB 0 / 0 00:03:30.291 node0 2048kB 2048 / 2048 00:03:30.291 node1 1048576kB 0 / 0 00:03:30.291 node1 2048kB 0 / 0 00:03:30.291 00:03:30.291 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:30.291 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:30.291 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:30.291 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:30.291 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:30.291 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:30.291 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:30.291 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:30.291 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:30.291 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:30.291 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:30.291 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:30.291 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:30.291 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:30.292 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:30.292 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:30.292 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:30.549 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:30.549 15:27:09 -- spdk/autotest.sh@141 -- # uname -s 00:03:30.549 15:27:09 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:03:30.549 15:27:09 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:03:30.549 15:27:09 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:31.941 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:31.941 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:31.941 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:31.941 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:31.941 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:31.941 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:03:31.941 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:31.941 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:31.941 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:32.510 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:32.769 15:27:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:33.706 15:27:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:33.706 15:27:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:33.706 15:27:13 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:03:33.706 15:27:13 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:03:33.706 15:27:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:33.706 15:27:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:33.706 15:27:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:33.706 15:27:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:33.706 15:27:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:33.964 15:27:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:33.964 15:27:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:33.964 15:27:13 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:34.898 Waiting for block devices as requested 00:03:34.898 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:35.157 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:35.157 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:35.157 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:35.417 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:35.417 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:35.417 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:35.417 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:35.677 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:35.677 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:35.677 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:35.677 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:35.935 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:35.935 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:35.935 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:35.935 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:36.194 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:36.194 15:27:15 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:03:36.194 15:27:15 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:03:36.194 15:27:15 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:36.194 15:27:15 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:03:36.194 15:27:15 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1530 -- # grep oacs 00:03:36.194 15:27:15 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:03:36.194 15:27:15 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:03:36.194 15:27:15 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:03:36.194 15:27:15 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:03:36.194 15:27:15 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:03:36.194 15:27:15 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:03:36.194 15:27:15 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:03:36.194 15:27:15 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:03:36.194 15:27:15 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:03:36.194 15:27:15 -- common/autotest_common.sh@1542 -- # continue 00:03:36.194 15:27:15 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:03:36.194 15:27:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:36.194 15:27:15 -- common/autotest_common.sh@10 -- # set +x 00:03:36.194 15:27:15 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:03:36.194 15:27:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:36.194 15:27:15 -- common/autotest_common.sh@10 -- # set +x 00:03:36.194 15:27:15 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:37.569 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:37.569 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:37.569 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:37.569 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:37.569 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:37.569 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:37.569 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:37.569 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:37.569 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:38.505 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:38.505 15:27:17 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:03:38.505 15:27:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:38.505 15:27:17 -- common/autotest_common.sh@10 -- # set +x 00:03:38.505 15:27:17 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:03:38.505 15:27:17 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:03:38.506 15:27:17 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:03:38.506 15:27:17 -- common/autotest_common.sh@1562 -- # bdfs=() 00:03:38.506 15:27:17 -- common/autotest_common.sh@1562 -- # local bdfs 00:03:38.506 15:27:17 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:38.506 15:27:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:38.506 
15:27:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:38.506 15:27:17 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:38.506 15:27:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:38.506 15:27:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:38.506 15:27:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:38.506 15:27:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:03:38.506 15:27:17 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:03:38.506 15:27:17 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:38.506 15:27:17 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:03:38.506 15:27:17 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:38.506 15:27:17 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:03:38.506 15:27:17 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:03:38.506 15:27:17 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:03:38.506 15:27:17 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1989274 00:03:38.506 15:27:17 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:38.506 15:27:17 -- common/autotest_common.sh@1583 -- # waitforlisten 1989274 00:03:38.506 15:27:17 -- common/autotest_common.sh@819 -- # '[' -z 1989274 ']' 00:03:38.506 15:27:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:38.506 15:27:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:38.506 15:27:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:38.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:38.506 15:27:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:38.506 15:27:17 -- common/autotest_common.sh@10 -- # set +x 00:03:38.764 [2024-07-10 15:27:17.898766] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
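The get_nvme_bdfs helper traced above builds its BDF array by piping scripts/gen_nvme.sh through jq. A minimal standalone sketch of that step, assuming gen_nvme.sh emits the same JSON config seen in this run, is:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
(( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
printf '%s\n' "${bdfs[@]}"   # prints 0000:88:00.0 on this node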
00:03:38.764 [2024-07-10 15:27:17.898844] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1989274 ] 00:03:38.764 EAL: No free 2048 kB hugepages reported on node 1 00:03:38.764 [2024-07-10 15:27:17.955314] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:38.764 [2024-07-10 15:27:18.060100] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:38.764 [2024-07-10 15:27:18.060258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:39.698 15:27:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:39.698 15:27:18 -- common/autotest_common.sh@852 -- # return 0 00:03:39.698 15:27:18 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:03:39.698 15:27:18 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:03:39.698 15:27:18 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:42.984 nvme0n1 00:03:42.984 15:27:21 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:42.984 [2024-07-10 15:27:22.159233] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:03:42.984 [2024-07-10 15:27:22.159275] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:42.984 request: 00:03:42.984 { 00:03:42.984 "nvme_ctrlr_name": "nvme0", 00:03:42.984 "password": "test", 00:03:42.984 "method": "bdev_nvme_opal_revert", 00:03:42.984 "req_id": 1 00:03:42.984 } 00:03:42.984 Got JSON-RPC error response 00:03:42.984 response: 00:03:42.984 { 00:03:42.984 "code": -32603, 00:03:42.984 "message": "Internal error" 00:03:42.984 } 00:03:42.984 15:27:22 -- common/autotest_common.sh@1589 -- # true 00:03:42.984 15:27:22 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:03:42.984 15:27:22 -- common/autotest_common.sh@1593 -- # killprocess 1989274 00:03:42.984 15:27:22 -- common/autotest_common.sh@926 -- # '[' -z 1989274 ']' 00:03:42.984 15:27:22 -- common/autotest_common.sh@930 -- # kill -0 1989274 00:03:42.984 15:27:22 -- common/autotest_common.sh@931 -- # uname 00:03:42.984 15:27:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:42.984 15:27:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1989274 00:03:42.984 15:27:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:42.984 15:27:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:42.984 15:27:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1989274' 00:03:42.984 killing process with pid 1989274 00:03:42.984 15:27:22 -- common/autotest_common.sh@945 -- # kill 1989274 00:03:42.985 15:27:22 -- common/autotest_common.sh@950 -- # wait 1989274 00:03:44.884 15:27:23 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:03:44.884 15:27:23 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:03:44.884 15:27:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:44.884 15:27:23 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:03:44.884 15:27:23 -- spdk/autotest.sh@173 -- # timing_enter lib 00:03:44.884 15:27:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:44.884 15:27:23 -- common/autotest_common.sh@10 -- # set +x 00:03:44.884 
15:27:23 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:44.884 15:27:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.884 15:27:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.884 15:27:23 -- common/autotest_common.sh@10 -- # set +x 00:03:44.884 ************************************ 00:03:44.884 START TEST env 00:03:44.884 ************************************ 00:03:44.884 15:27:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:44.884 * Looking for test storage... 00:03:44.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:44.884 15:27:24 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.884 15:27:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.884 15:27:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.884 15:27:24 -- common/autotest_common.sh@10 -- # set +x 00:03:44.884 ************************************ 00:03:44.884 START TEST env_memory 00:03:44.884 ************************************ 00:03:44.885 15:27:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:44.885 00:03:44.885 00:03:44.885 CUnit - A unit testing framework for C - Version 2.1-3 00:03:44.885 http://cunit.sourceforge.net/ 00:03:44.885 00:03:44.885 00:03:44.885 Suite: memory 00:03:44.885 Test: alloc and free memory map ...[2024-07-10 15:27:24.078664] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:44.885 passed 00:03:44.885 Test: mem map translation ...[2024-07-10 15:27:24.099262] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:44.885 [2024-07-10 15:27:24.099283] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:44.885 [2024-07-10 15:27:24.099339] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:44.885 [2024-07-10 15:27:24.099352] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:44.885 passed 00:03:44.885 Test: mem map registration ...[2024-07-10 15:27:24.140245] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:44.885 [2024-07-10 15:27:24.140264] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:44.885 passed 00:03:44.885 Test: mem map adjacent registrations ...passed 00:03:44.885 00:03:44.885 Run Summary: Type Total Ran Passed Failed Inactive 00:03:44.885 suites 1 1 n/a 0 0 00:03:44.885 tests 4 4 4 0 0 00:03:44.885 asserts 152 152 152 0 n/a 00:03:44.885 00:03:44.885 Elapsed time = 0.142 seconds 00:03:44.885 00:03:44.885 real 0m0.150s 00:03:44.885 user 0m0.144s 00:03:44.885 sys 0m0.005s 
00:03:44.885 15:27:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.885 15:27:24 -- common/autotest_common.sh@10 -- # set +x 00:03:44.885 ************************************ 00:03:44.885 END TEST env_memory 00:03:44.885 ************************************ 00:03:44.885 15:27:24 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.885 15:27:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:44.885 15:27:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:44.885 15:27:24 -- common/autotest_common.sh@10 -- # set +x 00:03:44.885 ************************************ 00:03:44.885 START TEST env_vtophys 00:03:44.885 ************************************ 00:03:44.885 15:27:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:44.885 EAL: lib.eal log level changed from notice to debug 00:03:44.885 EAL: Detected lcore 0 as core 0 on socket 0 00:03:44.885 EAL: Detected lcore 1 as core 1 on socket 0 00:03:44.885 EAL: Detected lcore 2 as core 2 on socket 0 00:03:44.885 EAL: Detected lcore 3 as core 3 on socket 0 00:03:44.885 EAL: Detected lcore 4 as core 4 on socket 0 00:03:44.885 EAL: Detected lcore 5 as core 5 on socket 0 00:03:44.885 EAL: Detected lcore 6 as core 8 on socket 0 00:03:44.885 EAL: Detected lcore 7 as core 9 on socket 0 00:03:44.885 EAL: Detected lcore 8 as core 10 on socket 0 00:03:44.885 EAL: Detected lcore 9 as core 11 on socket 0 00:03:44.885 EAL: Detected lcore 10 as core 12 on socket 0 00:03:44.885 EAL: Detected lcore 11 as core 13 on socket 0 00:03:44.885 EAL: Detected lcore 12 as core 0 on socket 1 00:03:44.885 EAL: Detected lcore 13 as core 1 on socket 1 00:03:44.885 EAL: Detected lcore 14 as core 2 on socket 1 00:03:44.885 EAL: Detected lcore 15 as core 3 on socket 1 00:03:44.885 EAL: Detected lcore 16 as core 4 on socket 1 00:03:44.885 EAL: Detected lcore 17 as core 5 on socket 1 00:03:44.885 EAL: Detected lcore 18 as core 8 on socket 1 00:03:44.885 EAL: Detected lcore 19 as core 9 on socket 1 00:03:44.885 EAL: Detected lcore 20 as core 10 on socket 1 00:03:44.885 EAL: Detected lcore 21 as core 11 on socket 1 00:03:44.885 EAL: Detected lcore 22 as core 12 on socket 1 00:03:44.885 EAL: Detected lcore 23 as core 13 on socket 1 00:03:44.885 EAL: Detected lcore 24 as core 0 on socket 0 00:03:44.885 EAL: Detected lcore 25 as core 1 on socket 0 00:03:44.885 EAL: Detected lcore 26 as core 2 on socket 0 00:03:44.885 EAL: Detected lcore 27 as core 3 on socket 0 00:03:44.885 EAL: Detected lcore 28 as core 4 on socket 0 00:03:44.885 EAL: Detected lcore 29 as core 5 on socket 0 00:03:44.885 EAL: Detected lcore 30 as core 8 on socket 0 00:03:44.885 EAL: Detected lcore 31 as core 9 on socket 0 00:03:44.885 EAL: Detected lcore 32 as core 10 on socket 0 00:03:44.885 EAL: Detected lcore 33 as core 11 on socket 0 00:03:44.885 EAL: Detected lcore 34 as core 12 on socket 0 00:03:44.885 EAL: Detected lcore 35 as core 13 on socket 0 00:03:44.885 EAL: Detected lcore 36 as core 0 on socket 1 00:03:44.885 EAL: Detected lcore 37 as core 1 on socket 1 00:03:44.885 EAL: Detected lcore 38 as core 2 on socket 1 00:03:44.885 EAL: Detected lcore 39 as core 3 on socket 1 00:03:44.885 EAL: Detected lcore 40 as core 4 on socket 1 00:03:44.885 EAL: Detected lcore 41 as core 5 on socket 1 00:03:44.885 EAL: Detected lcore 42 as core 8 on socket 1 00:03:44.885 EAL: Detected lcore 43 as core 9 on socket 1 00:03:44.885 EAL: Detected 
lcore 44 as core 10 on socket 1 00:03:44.885 EAL: Detected lcore 45 as core 11 on socket 1 00:03:44.885 EAL: Detected lcore 46 as core 12 on socket 1 00:03:44.885 EAL: Detected lcore 47 as core 13 on socket 1 00:03:44.885 EAL: Maximum logical cores by configuration: 128 00:03:44.885 EAL: Detected CPU lcores: 48 00:03:44.885 EAL: Detected NUMA nodes: 2 00:03:44.885 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:44.885 EAL: Detected shared linkage of DPDK 00:03:44.885 EAL: No shared files mode enabled, IPC will be disabled 00:03:44.885 EAL: Bus pci wants IOVA as 'DC' 00:03:44.885 EAL: Buses did not request a specific IOVA mode. 00:03:45.144 EAL: IOMMU is available, selecting IOVA as VA mode. 00:03:45.144 EAL: Selected IOVA mode 'VA' 00:03:45.144 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.144 EAL: Probing VFIO support... 00:03:45.144 EAL: IOMMU type 1 (Type 1) is supported 00:03:45.144 EAL: IOMMU type 7 (sPAPR) is not supported 00:03:45.144 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:03:45.144 EAL: VFIO support initialized 00:03:45.144 EAL: Ask a virtual area of 0x2e000 bytes 00:03:45.144 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:03:45.144 EAL: Setting up physically contiguous memory... 00:03:45.144 EAL: Setting maximum number of open files to 524288 00:03:45.144 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:03:45.144 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:03:45.144 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:03:45.144 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.144 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:03:45.144 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.144 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.144 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:03:45.144 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:03:45.144 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.144 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:03:45.144 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.145 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.145 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:03:45.145 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:03:45.145 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.145 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:03:45.145 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.145 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.145 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:03:45.145 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:03:45.145 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.145 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:03:45.145 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:03:45.145 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.145 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:03:45.145 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:03:45.145 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:03:45.145 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.145 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:03:45.145 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.145 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.145 
EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:03:45.145 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:03:45.145 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.145 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:03:45.145 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.145 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.145 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:03:45.145 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:03:45.145 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.145 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:03:45.145 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.145 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.145 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:03:45.145 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:03:45.145 EAL: Ask a virtual area of 0x61000 bytes 00:03:45.145 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:03:45.145 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:03:45.145 EAL: Ask a virtual area of 0x400000000 bytes 00:03:45.145 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:03:45.145 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:03:45.145 EAL: Hugepages will be freed exactly as allocated. 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: TSC frequency is ~2700000 KHz 00:03:45.145 EAL: Main lcore 0 is ready (tid=7fa085b44a00;cpuset=[0]) 00:03:45.145 EAL: Trying to obtain current memory policy. 00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 0 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 2MB 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: No PCI address specified using 'addr=' in: bus=pci 00:03:45.145 EAL: Mem event callback 'spdk:(nil)' registered 00:03:45.145 00:03:45.145 00:03:45.145 CUnit - A unit testing framework for C - Version 2.1-3 00:03:45.145 http://cunit.sourceforge.net/ 00:03:45.145 00:03:45.145 00:03:45.145 Suite: components_suite 00:03:45.145 Test: vtophys_malloc_test ...passed 00:03:45.145 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 4MB 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was shrunk by 4MB 00:03:45.145 EAL: Trying to obtain current memory policy. 
00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 6MB 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was shrunk by 6MB 00:03:45.145 EAL: Trying to obtain current memory policy. 00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 10MB 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was shrunk by 10MB 00:03:45.145 EAL: Trying to obtain current memory policy. 00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 18MB 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was shrunk by 18MB 00:03:45.145 EAL: Trying to obtain current memory policy. 00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 34MB 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was shrunk by 34MB 00:03:45.145 EAL: Trying to obtain current memory policy. 00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 66MB 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was shrunk by 66MB 00:03:45.145 EAL: Trying to obtain current memory policy. 
00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 130MB 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was shrunk by 130MB 00:03:45.145 EAL: Trying to obtain current memory policy. 00:03:45.145 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.145 EAL: Restoring previous memory policy: 4 00:03:45.145 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.145 EAL: request: mp_malloc_sync 00:03:45.145 EAL: No shared files mode enabled, IPC is disabled 00:03:45.145 EAL: Heap on socket 0 was expanded by 258MB 00:03:45.404 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.404 EAL: request: mp_malloc_sync 00:03:45.404 EAL: No shared files mode enabled, IPC is disabled 00:03:45.404 EAL: Heap on socket 0 was shrunk by 258MB 00:03:45.404 EAL: Trying to obtain current memory policy. 00:03:45.404 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.404 EAL: Restoring previous memory policy: 4 00:03:45.404 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.404 EAL: request: mp_malloc_sync 00:03:45.404 EAL: No shared files mode enabled, IPC is disabled 00:03:45.404 EAL: Heap on socket 0 was expanded by 514MB 00:03:45.662 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.662 EAL: request: mp_malloc_sync 00:03:45.662 EAL: No shared files mode enabled, IPC is disabled 00:03:45.662 EAL: Heap on socket 0 was shrunk by 514MB 00:03:45.662 EAL: Trying to obtain current memory policy. 
00:03:45.662 EAL: Setting policy MPOL_PREFERRED for socket 0 00:03:45.921 EAL: Restoring previous memory policy: 4 00:03:45.921 EAL: Calling mem event callback 'spdk:(nil)' 00:03:45.921 EAL: request: mp_malloc_sync 00:03:45.921 EAL: No shared files mode enabled, IPC is disabled 00:03:45.921 EAL: Heap on socket 0 was expanded by 1026MB 00:03:46.178 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.437 EAL: request: mp_malloc_sync 00:03:46.437 EAL: No shared files mode enabled, IPC is disabled 00:03:46.437 EAL: Heap on socket 0 was shrunk by 1026MB 00:03:46.437 passed 00:03:46.437 00:03:46.437 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.437 suites 1 1 n/a 0 0 00:03:46.437 tests 2 2 2 0 0 00:03:46.437 asserts 497 497 497 0 n/a 00:03:46.437 00:03:46.437 Elapsed time = 1.366 seconds 00:03:46.437 EAL: Calling mem event callback 'spdk:(nil)' 00:03:46.437 EAL: request: mp_malloc_sync 00:03:46.437 EAL: No shared files mode enabled, IPC is disabled 00:03:46.437 EAL: Heap on socket 0 was shrunk by 2MB 00:03:46.437 EAL: No shared files mode enabled, IPC is disabled 00:03:46.437 EAL: No shared files mode enabled, IPC is disabled 00:03:46.437 EAL: No shared files mode enabled, IPC is disabled 00:03:46.437 00:03:46.437 real 0m1.481s 00:03:46.437 user 0m0.847s 00:03:46.437 sys 0m0.603s 00:03:46.437 15:27:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.437 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:03:46.437 ************************************ 00:03:46.437 END TEST env_vtophys 00:03:46.437 ************************************ 00:03:46.437 15:27:25 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:46.437 15:27:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:46.437 15:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:46.437 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:03:46.437 ************************************ 00:03:46.437 START TEST env_pci 00:03:46.437 ************************************ 00:03:46.437 15:27:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:03:46.437 00:03:46.437 00:03:46.437 CUnit - A unit testing framework for C - Version 2.1-3 00:03:46.437 http://cunit.sourceforge.net/ 00:03:46.437 00:03:46.437 00:03:46.437 Suite: pci 00:03:46.437 Test: pci_hook ...[2024-07-10 15:27:25.734820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1990312 has claimed it 00:03:46.437 EAL: Cannot find device (10000:00:01.0) 00:03:46.437 EAL: Failed to attach device on primary process 00:03:46.437 passed 00:03:46.437 00:03:46.437 Run Summary: Type Total Ran Passed Failed Inactive 00:03:46.437 suites 1 1 n/a 0 0 00:03:46.437 tests 1 1 1 0 0 00:03:46.437 asserts 25 25 25 0 n/a 00:03:46.437 00:03:46.437 Elapsed time = 0.020 seconds 00:03:46.437 00:03:46.437 real 0m0.032s 00:03:46.437 user 0m0.007s 00:03:46.437 sys 0m0.025s 00:03:46.437 15:27:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:46.437 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:03:46.437 ************************************ 00:03:46.437 END TEST env_pci 00:03:46.437 ************************************ 00:03:46.437 15:27:25 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:03:46.437 15:27:25 -- env/env.sh@15 -- # uname 00:03:46.437 15:27:25 -- env/env.sh@15 -- # '[' Linux = 
Linux ']' 00:03:46.437 15:27:25 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:03:46.437 15:27:25 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:46.437 15:27:25 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:03:46.437 15:27:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:46.437 15:27:25 -- common/autotest_common.sh@10 -- # set +x 00:03:46.437 ************************************ 00:03:46.437 START TEST env_dpdk_post_init 00:03:46.437 ************************************ 00:03:46.437 15:27:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:03:46.437 EAL: Detected CPU lcores: 48 00:03:46.437 EAL: Detected NUMA nodes: 2 00:03:46.437 EAL: Detected shared linkage of DPDK 00:03:46.437 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:46.696 EAL: Selected IOVA mode 'VA' 00:03:46.696 EAL: No free 2048 kB hugepages reported on node 1 00:03:46.696 EAL: VFIO support initialized 00:03:46.696 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:46.696 EAL: Using IOMMU type 1 (Type 1) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:03:46.696 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:03:46.956 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:03:46.956 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:03:47.524 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:03:50.806 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:03:50.806 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:03:51.064 Starting DPDK initialization... 00:03:51.064 Starting SPDK post initialization... 00:03:51.064 SPDK NVMe probe 00:03:51.064 Attaching to 0000:88:00.0 00:03:51.064 Attached to 0000:88:00.0 00:03:51.064 Cleaning up... 
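The env_dpdk_post_init probe above only succeeds because setup.sh has already bound the allow-listed controller to vfio-pci and reserved hugepages. A hedged sketch of reproducing that bind/probe/reset cycle by hand, reusing the workspace paths and the PCI_ALLOWED value from this log (root privileges and the exact setup.sh environment handling are assumptions), is:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
export PCI_ALLOWED=0000:88:00.0
# bind: 0000:88:00.0 (8086 0a54): nvme -> vfio-pci, hugepages reserved
sudo -E "$rootdir/scripts/setup.sh" config
"$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
# hand the controller back to the kernel nvme driver afterwards
sudo "$rootdir/scripts/setup.sh" reset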
00:03:51.064 00:03:51.064 real 0m4.406s 00:03:51.064 user 0m3.281s 00:03:51.064 sys 0m0.176s 00:03:51.064 15:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.064 15:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.064 ************************************ 00:03:51.064 END TEST env_dpdk_post_init 00:03:51.064 ************************************ 00:03:51.064 15:27:30 -- env/env.sh@26 -- # uname 00:03:51.064 15:27:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:03:51.064 15:27:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:51.064 15:27:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.064 15:27:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.064 15:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.064 ************************************ 00:03:51.064 START TEST env_mem_callbacks 00:03:51.064 ************************************ 00:03:51.064 15:27:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:03:51.064 EAL: Detected CPU lcores: 48 00:03:51.064 EAL: Detected NUMA nodes: 2 00:03:51.064 EAL: Detected shared linkage of DPDK 00:03:51.064 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:03:51.064 EAL: Selected IOVA mode 'VA' 00:03:51.064 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.064 EAL: VFIO support initialized 00:03:51.064 TELEMETRY: No legacy callbacks, legacy socket not created 00:03:51.064 00:03:51.064 00:03:51.064 CUnit - A unit testing framework for C - Version 2.1-3 00:03:51.064 http://cunit.sourceforge.net/ 00:03:51.064 00:03:51.064 00:03:51.064 Suite: memory 00:03:51.064 Test: test ... 
00:03:51.064 register 0x200000200000 2097152 00:03:51.064 malloc 3145728 00:03:51.064 register 0x200000400000 4194304 00:03:51.064 buf 0x200000500000 len 3145728 PASSED 00:03:51.064 malloc 64 00:03:51.064 buf 0x2000004fff40 len 64 PASSED 00:03:51.064 malloc 4194304 00:03:51.064 register 0x200000800000 6291456 00:03:51.064 buf 0x200000a00000 len 4194304 PASSED 00:03:51.064 free 0x200000500000 3145728 00:03:51.064 free 0x2000004fff40 64 00:03:51.064 unregister 0x200000400000 4194304 PASSED 00:03:51.064 free 0x200000a00000 4194304 00:03:51.064 unregister 0x200000800000 6291456 PASSED 00:03:51.064 malloc 8388608 00:03:51.065 register 0x200000400000 10485760 00:03:51.065 buf 0x200000600000 len 8388608 PASSED 00:03:51.065 free 0x200000600000 8388608 00:03:51.065 unregister 0x200000400000 10485760 PASSED 00:03:51.065 passed 00:03:51.065 00:03:51.065 Run Summary: Type Total Ran Passed Failed Inactive 00:03:51.065 suites 1 1 n/a 0 0 00:03:51.065 tests 1 1 1 0 0 00:03:51.065 asserts 15 15 15 0 n/a 00:03:51.065 00:03:51.065 Elapsed time = 0.005 seconds 00:03:51.065 00:03:51.065 real 0m0.049s 00:03:51.065 user 0m0.019s 00:03:51.065 sys 0m0.029s 00:03:51.065 15:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.065 15:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.065 ************************************ 00:03:51.065 END TEST env_mem_callbacks 00:03:51.065 ************************************ 00:03:51.065 00:03:51.065 real 0m6.296s 00:03:51.065 user 0m4.365s 00:03:51.065 sys 0m0.974s 00:03:51.065 15:27:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:51.065 15:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.065 ************************************ 00:03:51.065 END TEST env 00:03:51.065 ************************************ 00:03:51.065 15:27:30 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:51.065 15:27:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:51.065 15:27:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:51.065 15:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.065 ************************************ 00:03:51.065 START TEST rpc 00:03:51.065 ************************************ 00:03:51.065 15:27:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:03:51.065 * Looking for test storage... 00:03:51.065 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:51.065 15:27:30 -- rpc/rpc.sh@65 -- # spdk_pid=1990973 00:03:51.065 15:27:30 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:03:51.065 15:27:30 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:51.065 15:27:30 -- rpc/rpc.sh@67 -- # waitforlisten 1990973 00:03:51.065 15:27:30 -- common/autotest_common.sh@819 -- # '[' -z 1990973 ']' 00:03:51.065 15:27:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:51.065 15:27:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:51.065 15:27:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:51.065 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
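Once spdk_tgt is listening on /var/tmp/spdk.sock, the same bdev RPCs that rpc_integrity exercises below can be issued by hand through scripts/rpc.py; a short sketch using only calls that appear in this run:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc="$rootdir/scripts/rpc.py"
$rpc bdev_malloc_create 8 512                       # creates Malloc0 (8 MB, 512-byte blocks)
$rpc bdev_passthru_create -b Malloc0 -p Passthru0   # layers a passthru bdev on top
$rpc bdev_get_bdevs | jq length                     # expect 2 while both bdevs exist
$rpc bdev_passthru_delete Passthru0
$rpc bdev_malloc_delete Malloc0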
00:03:51.065 15:27:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:51.065 15:27:30 -- common/autotest_common.sh@10 -- # set +x 00:03:51.065 [2024-07-10 15:27:30.411367] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:03:51.065 [2024-07-10 15:27:30.411473] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1990973 ] 00:03:51.065 EAL: No free 2048 kB hugepages reported on node 1 00:03:51.324 [2024-07-10 15:27:30.468212] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:51.324 [2024-07-10 15:27:30.571685] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:51.325 [2024-07-10 15:27:30.571827] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:03:51.325 [2024-07-10 15:27:30.571845] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1990973' to capture a snapshot of events at runtime. 00:03:51.325 [2024-07-10 15:27:30.571861] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1990973 for offline analysis/debug. 00:03:51.325 [2024-07-10 15:27:30.571891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:52.260 15:27:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:52.260 15:27:31 -- common/autotest_common.sh@852 -- # return 0 00:03:52.260 15:27:31 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.260 15:27:31 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:52.260 15:27:31 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:03:52.260 15:27:31 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:03:52.260 15:27:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.260 15:27:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.260 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.260 ************************************ 00:03:52.260 START TEST rpc_integrity 00:03:52.260 ************************************ 00:03:52.260 15:27:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:52.260 15:27:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:52.260 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.260 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.260 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.260 15:27:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:52.260 15:27:31 -- rpc/rpc.sh@13 -- # jq length 00:03:52.260 15:27:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:52.260 15:27:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:52.260 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.260 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.260 15:27:31 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:03:52.260 15:27:31 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:03:52.260 15:27:31 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:52.260 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.260 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.260 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.260 15:27:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:52.260 { 00:03:52.260 "name": "Malloc0", 00:03:52.260 "aliases": [ 00:03:52.260 "016c57c1-2a09-4e67-a722-cd9516a3cc82" 00:03:52.260 ], 00:03:52.260 "product_name": "Malloc disk", 00:03:52.261 "block_size": 512, 00:03:52.261 "num_blocks": 16384, 00:03:52.261 "uuid": "016c57c1-2a09-4e67-a722-cd9516a3cc82", 00:03:52.261 "assigned_rate_limits": { 00:03:52.261 "rw_ios_per_sec": 0, 00:03:52.261 "rw_mbytes_per_sec": 0, 00:03:52.261 "r_mbytes_per_sec": 0, 00:03:52.261 "w_mbytes_per_sec": 0 00:03:52.261 }, 00:03:52.261 "claimed": false, 00:03:52.261 "zoned": false, 00:03:52.261 "supported_io_types": { 00:03:52.261 "read": true, 00:03:52.261 "write": true, 00:03:52.261 "unmap": true, 00:03:52.261 "write_zeroes": true, 00:03:52.261 "flush": true, 00:03:52.261 "reset": true, 00:03:52.261 "compare": false, 00:03:52.261 "compare_and_write": false, 00:03:52.261 "abort": true, 00:03:52.261 "nvme_admin": false, 00:03:52.261 "nvme_io": false 00:03:52.261 }, 00:03:52.261 "memory_domains": [ 00:03:52.261 { 00:03:52.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.261 "dma_device_type": 2 00:03:52.261 } 00:03:52.261 ], 00:03:52.261 "driver_specific": {} 00:03:52.261 } 00:03:52.261 ]' 00:03:52.261 15:27:31 -- rpc/rpc.sh@17 -- # jq length 00:03:52.261 15:27:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:52.261 15:27:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:03:52.261 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 [2024-07-10 15:27:31.457256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:03:52.261 [2024-07-10 15:27:31.457305] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:52.261 [2024-07-10 15:27:31.457329] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2441f70 00:03:52.261 [2024-07-10 15:27:31.457345] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:52.261 [2024-07-10 15:27:31.458895] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:52.261 [2024-07-10 15:27:31.458924] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:52.261 Passthru0 00:03:52.261 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.261 15:27:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:52.261 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.261 15:27:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:52.261 { 00:03:52.261 "name": "Malloc0", 00:03:52.261 "aliases": [ 00:03:52.261 "016c57c1-2a09-4e67-a722-cd9516a3cc82" 00:03:52.261 ], 00:03:52.261 "product_name": "Malloc disk", 00:03:52.261 "block_size": 512, 00:03:52.261 "num_blocks": 16384, 00:03:52.261 "uuid": "016c57c1-2a09-4e67-a722-cd9516a3cc82", 00:03:52.261 "assigned_rate_limits": { 00:03:52.261 "rw_ios_per_sec": 0, 00:03:52.261 "rw_mbytes_per_sec": 0, 00:03:52.261 
"r_mbytes_per_sec": 0, 00:03:52.261 "w_mbytes_per_sec": 0 00:03:52.261 }, 00:03:52.261 "claimed": true, 00:03:52.261 "claim_type": "exclusive_write", 00:03:52.261 "zoned": false, 00:03:52.261 "supported_io_types": { 00:03:52.261 "read": true, 00:03:52.261 "write": true, 00:03:52.261 "unmap": true, 00:03:52.261 "write_zeroes": true, 00:03:52.261 "flush": true, 00:03:52.261 "reset": true, 00:03:52.261 "compare": false, 00:03:52.261 "compare_and_write": false, 00:03:52.261 "abort": true, 00:03:52.261 "nvme_admin": false, 00:03:52.261 "nvme_io": false 00:03:52.261 }, 00:03:52.261 "memory_domains": [ 00:03:52.261 { 00:03:52.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.261 "dma_device_type": 2 00:03:52.261 } 00:03:52.261 ], 00:03:52.261 "driver_specific": {} 00:03:52.261 }, 00:03:52.261 { 00:03:52.261 "name": "Passthru0", 00:03:52.261 "aliases": [ 00:03:52.261 "06b2f6f3-42cc-58b2-9753-600ac6a0cbca" 00:03:52.261 ], 00:03:52.261 "product_name": "passthru", 00:03:52.261 "block_size": 512, 00:03:52.261 "num_blocks": 16384, 00:03:52.261 "uuid": "06b2f6f3-42cc-58b2-9753-600ac6a0cbca", 00:03:52.261 "assigned_rate_limits": { 00:03:52.261 "rw_ios_per_sec": 0, 00:03:52.261 "rw_mbytes_per_sec": 0, 00:03:52.261 "r_mbytes_per_sec": 0, 00:03:52.261 "w_mbytes_per_sec": 0 00:03:52.261 }, 00:03:52.261 "claimed": false, 00:03:52.261 "zoned": false, 00:03:52.261 "supported_io_types": { 00:03:52.261 "read": true, 00:03:52.261 "write": true, 00:03:52.261 "unmap": true, 00:03:52.261 "write_zeroes": true, 00:03:52.261 "flush": true, 00:03:52.261 "reset": true, 00:03:52.261 "compare": false, 00:03:52.261 "compare_and_write": false, 00:03:52.261 "abort": true, 00:03:52.261 "nvme_admin": false, 00:03:52.261 "nvme_io": false 00:03:52.261 }, 00:03:52.261 "memory_domains": [ 00:03:52.261 { 00:03:52.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.261 "dma_device_type": 2 00:03:52.261 } 00:03:52.261 ], 00:03:52.261 "driver_specific": { 00:03:52.261 "passthru": { 00:03:52.261 "name": "Passthru0", 00:03:52.261 "base_bdev_name": "Malloc0" 00:03:52.261 } 00:03:52.261 } 00:03:52.261 } 00:03:52.261 ]' 00:03:52.261 15:27:31 -- rpc/rpc.sh@21 -- # jq length 00:03:52.261 15:27:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:52.261 15:27:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:52.261 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.261 15:27:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:52.261 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.261 15:27:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:52.261 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.261 15:27:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:52.261 15:27:31 -- rpc/rpc.sh@26 -- # jq length 00:03:52.261 15:27:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:52.261 00:03:52.261 real 0m0.223s 00:03:52.261 user 0m0.145s 00:03:52.261 sys 0m0.018s 00:03:52.261 15:27:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 ************************************ 
00:03:52.261 END TEST rpc_integrity 00:03:52.261 ************************************ 00:03:52.261 15:27:31 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:52.261 15:27:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.261 15:27:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 ************************************ 00:03:52.261 START TEST rpc_plugins 00:03:52.261 ************************************ 00:03:52.261 15:27:31 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:03:52.261 15:27:31 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:52.261 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.261 15:27:31 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:52.261 15:27:31 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:52.261 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.261 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.261 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.261 15:27:31 -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:52.261 { 00:03:52.261 "name": "Malloc1", 00:03:52.261 "aliases": [ 00:03:52.261 "1e02f6ae-a825-4b70-8a3f-6779e5bfccc9" 00:03:52.261 ], 00:03:52.261 "product_name": "Malloc disk", 00:03:52.261 "block_size": 4096, 00:03:52.261 "num_blocks": 256, 00:03:52.261 "uuid": "1e02f6ae-a825-4b70-8a3f-6779e5bfccc9", 00:03:52.261 "assigned_rate_limits": { 00:03:52.261 "rw_ios_per_sec": 0, 00:03:52.261 "rw_mbytes_per_sec": 0, 00:03:52.261 "r_mbytes_per_sec": 0, 00:03:52.261 "w_mbytes_per_sec": 0 00:03:52.261 }, 00:03:52.261 "claimed": false, 00:03:52.261 "zoned": false, 00:03:52.261 "supported_io_types": { 00:03:52.261 "read": true, 00:03:52.261 "write": true, 00:03:52.261 "unmap": true, 00:03:52.261 "write_zeroes": true, 00:03:52.261 "flush": true, 00:03:52.261 "reset": true, 00:03:52.261 "compare": false, 00:03:52.261 "compare_and_write": false, 00:03:52.261 "abort": true, 00:03:52.261 "nvme_admin": false, 00:03:52.261 "nvme_io": false 00:03:52.261 }, 00:03:52.261 "memory_domains": [ 00:03:52.261 { 00:03:52.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.261 "dma_device_type": 2 00:03:52.261 } 00:03:52.261 ], 00:03:52.261 "driver_specific": {} 00:03:52.261 } 00:03:52.261 ]' 00:03:52.261 15:27:31 -- rpc/rpc.sh@32 -- # jq length 00:03:52.519 15:27:31 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:52.519 15:27:31 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:52.519 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.519 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.519 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.519 15:27:31 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:52.519 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.519 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.519 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.519 15:27:31 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:52.519 15:27:31 -- rpc/rpc.sh@36 -- # jq length 00:03:52.519 15:27:31 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:52.519 00:03:52.519 real 0m0.115s 00:03:52.519 user 0m0.074s 00:03:52.519 sys 0m0.011s 00:03:52.519 15:27:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.519 15:27:31 -- 
common/autotest_common.sh@10 -- # set +x 00:03:52.519 ************************************ 00:03:52.519 END TEST rpc_plugins 00:03:52.519 ************************************ 00:03:52.519 15:27:31 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:52.519 15:27:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.519 15:27:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.519 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.519 ************************************ 00:03:52.519 START TEST rpc_trace_cmd_test 00:03:52.519 ************************************ 00:03:52.519 15:27:31 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:03:52.519 15:27:31 -- rpc/rpc.sh@40 -- # local info 00:03:52.519 15:27:31 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:52.519 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.519 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.519 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.519 15:27:31 -- rpc/rpc.sh@42 -- # info='{ 00:03:52.519 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1990973", 00:03:52.519 "tpoint_group_mask": "0x8", 00:03:52.519 "iscsi_conn": { 00:03:52.519 "mask": "0x2", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "scsi": { 00:03:52.519 "mask": "0x4", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "bdev": { 00:03:52.519 "mask": "0x8", 00:03:52.519 "tpoint_mask": "0xffffffffffffffff" 00:03:52.519 }, 00:03:52.519 "nvmf_rdma": { 00:03:52.519 "mask": "0x10", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "nvmf_tcp": { 00:03:52.519 "mask": "0x20", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "ftl": { 00:03:52.519 "mask": "0x40", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "blobfs": { 00:03:52.519 "mask": "0x80", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "dsa": { 00:03:52.519 "mask": "0x200", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "thread": { 00:03:52.519 "mask": "0x400", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "nvme_pcie": { 00:03:52.519 "mask": "0x800", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "iaa": { 00:03:52.519 "mask": "0x1000", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "nvme_tcp": { 00:03:52.519 "mask": "0x2000", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 }, 00:03:52.519 "bdev_nvme": { 00:03:52.519 "mask": "0x4000", 00:03:52.519 "tpoint_mask": "0x0" 00:03:52.519 } 00:03:52.519 }' 00:03:52.519 15:27:31 -- rpc/rpc.sh@43 -- # jq length 00:03:52.519 15:27:31 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:03:52.519 15:27:31 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:52.519 15:27:31 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:52.519 15:27:31 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:52.519 15:27:31 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:52.519 15:27:31 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:52.777 15:27:31 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:52.778 15:27:31 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:52.778 15:27:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:03:52.778 00:03:52.778 real 0m0.199s 00:03:52.778 user 0m0.168s 00:03:52.778 sys 0m0.020s 00:03:52.778 15:27:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.778 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.778 ************************************ 
00:03:52.778 END TEST rpc_trace_cmd_test 00:03:52.778 ************************************ 00:03:52.778 15:27:31 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:52.778 15:27:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:52.778 15:27:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:52.778 15:27:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:52.778 15:27:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:52.778 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.778 ************************************ 00:03:52.778 START TEST rpc_daemon_integrity 00:03:52.778 ************************************ 00:03:52.778 15:27:31 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:03:52.778 15:27:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:52.778 15:27:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.778 15:27:31 -- common/autotest_common.sh@10 -- # set +x 00:03:52.778 15:27:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.778 15:27:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:52.778 15:27:31 -- rpc/rpc.sh@13 -- # jq length 00:03:52.778 15:27:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:52.778 15:27:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:52.778 15:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.778 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:52.778 15:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.778 15:27:32 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:52.778 15:27:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:52.778 15:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.778 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:52.778 15:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.778 15:27:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:52.778 { 00:03:52.778 "name": "Malloc2", 00:03:52.778 "aliases": [ 00:03:52.778 "fc8a62f4-9552-4297-9cab-65e334aabd5c" 00:03:52.778 ], 00:03:52.778 "product_name": "Malloc disk", 00:03:52.778 "block_size": 512, 00:03:52.778 "num_blocks": 16384, 00:03:52.778 "uuid": "fc8a62f4-9552-4297-9cab-65e334aabd5c", 00:03:52.778 "assigned_rate_limits": { 00:03:52.778 "rw_ios_per_sec": 0, 00:03:52.778 "rw_mbytes_per_sec": 0, 00:03:52.778 "r_mbytes_per_sec": 0, 00:03:52.778 "w_mbytes_per_sec": 0 00:03:52.778 }, 00:03:52.778 "claimed": false, 00:03:52.778 "zoned": false, 00:03:52.778 "supported_io_types": { 00:03:52.778 "read": true, 00:03:52.778 "write": true, 00:03:52.778 "unmap": true, 00:03:52.778 "write_zeroes": true, 00:03:52.778 "flush": true, 00:03:52.778 "reset": true, 00:03:52.778 "compare": false, 00:03:52.778 "compare_and_write": false, 00:03:52.778 "abort": true, 00:03:52.778 "nvme_admin": false, 00:03:52.778 "nvme_io": false 00:03:52.778 }, 00:03:52.778 "memory_domains": [ 00:03:52.778 { 00:03:52.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.778 "dma_device_type": 2 00:03:52.778 } 00:03:52.778 ], 00:03:52.778 "driver_specific": {} 00:03:52.778 } 00:03:52.778 ]' 00:03:52.778 15:27:32 -- rpc/rpc.sh@17 -- # jq length 00:03:52.778 15:27:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:52.778 15:27:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:52.778 15:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.778 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:52.778 [2024-07-10 15:27:32.071029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:52.778 [2024-07-10 
15:27:32.071073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:52.778 [2024-07-10 15:27:32.071097] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25e1970 00:03:52.778 [2024-07-10 15:27:32.071112] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:52.778 [2024-07-10 15:27:32.072449] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:52.778 [2024-07-10 15:27:32.072491] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:52.778 Passthru0 00:03:52.778 15:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.778 15:27:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:52.778 15:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.778 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:52.778 15:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.778 15:27:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:52.778 { 00:03:52.778 "name": "Malloc2", 00:03:52.778 "aliases": [ 00:03:52.778 "fc8a62f4-9552-4297-9cab-65e334aabd5c" 00:03:52.778 ], 00:03:52.778 "product_name": "Malloc disk", 00:03:52.778 "block_size": 512, 00:03:52.778 "num_blocks": 16384, 00:03:52.778 "uuid": "fc8a62f4-9552-4297-9cab-65e334aabd5c", 00:03:52.778 "assigned_rate_limits": { 00:03:52.778 "rw_ios_per_sec": 0, 00:03:52.778 "rw_mbytes_per_sec": 0, 00:03:52.778 "r_mbytes_per_sec": 0, 00:03:52.778 "w_mbytes_per_sec": 0 00:03:52.778 }, 00:03:52.778 "claimed": true, 00:03:52.778 "claim_type": "exclusive_write", 00:03:52.778 "zoned": false, 00:03:52.778 "supported_io_types": { 00:03:52.778 "read": true, 00:03:52.778 "write": true, 00:03:52.778 "unmap": true, 00:03:52.778 "write_zeroes": true, 00:03:52.778 "flush": true, 00:03:52.778 "reset": true, 00:03:52.778 "compare": false, 00:03:52.778 "compare_and_write": false, 00:03:52.778 "abort": true, 00:03:52.778 "nvme_admin": false, 00:03:52.778 "nvme_io": false 00:03:52.778 }, 00:03:52.778 "memory_domains": [ 00:03:52.778 { 00:03:52.778 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.778 "dma_device_type": 2 00:03:52.778 } 00:03:52.778 ], 00:03:52.778 "driver_specific": {} 00:03:52.778 }, 00:03:52.778 { 00:03:52.778 "name": "Passthru0", 00:03:52.779 "aliases": [ 00:03:52.779 "9a3f7a5a-38a3-5181-b080-f7300d87a73a" 00:03:52.779 ], 00:03:52.779 "product_name": "passthru", 00:03:52.779 "block_size": 512, 00:03:52.779 "num_blocks": 16384, 00:03:52.779 "uuid": "9a3f7a5a-38a3-5181-b080-f7300d87a73a", 00:03:52.779 "assigned_rate_limits": { 00:03:52.779 "rw_ios_per_sec": 0, 00:03:52.779 "rw_mbytes_per_sec": 0, 00:03:52.779 "r_mbytes_per_sec": 0, 00:03:52.779 "w_mbytes_per_sec": 0 00:03:52.779 }, 00:03:52.779 "claimed": false, 00:03:52.779 "zoned": false, 00:03:52.779 "supported_io_types": { 00:03:52.779 "read": true, 00:03:52.779 "write": true, 00:03:52.779 "unmap": true, 00:03:52.779 "write_zeroes": true, 00:03:52.779 "flush": true, 00:03:52.779 "reset": true, 00:03:52.779 "compare": false, 00:03:52.779 "compare_and_write": false, 00:03:52.779 "abort": true, 00:03:52.779 "nvme_admin": false, 00:03:52.779 "nvme_io": false 00:03:52.779 }, 00:03:52.779 "memory_domains": [ 00:03:52.779 { 00:03:52.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:52.779 "dma_device_type": 2 00:03:52.779 } 00:03:52.779 ], 00:03:52.779 "driver_specific": { 00:03:52.779 "passthru": { 00:03:52.779 "name": "Passthru0", 00:03:52.779 "base_bdev_name": "Malloc2" 00:03:52.779 } 00:03:52.779 } 00:03:52.779 } 
00:03:52.779 ]' 00:03:52.779 15:27:32 -- rpc/rpc.sh@21 -- # jq length 00:03:52.779 15:27:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:52.779 15:27:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:52.779 15:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.779 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:52.779 15:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.779 15:27:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:52.779 15:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.779 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:52.779 15:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.779 15:27:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:52.779 15:27:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:03:52.779 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:52.779 15:27:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:03:52.779 15:27:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:52.779 15:27:32 -- rpc/rpc.sh@26 -- # jq length 00:03:53.037 15:27:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:53.037 00:03:53.037 real 0m0.220s 00:03:53.037 user 0m0.150s 00:03:53.037 sys 0m0.018s 00:03:53.037 15:27:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.037 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.037 ************************************ 00:03:53.037 END TEST rpc_daemon_integrity 00:03:53.037 ************************************ 00:03:53.037 15:27:32 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:53.037 15:27:32 -- rpc/rpc.sh@84 -- # killprocess 1990973 00:03:53.037 15:27:32 -- common/autotest_common.sh@926 -- # '[' -z 1990973 ']' 00:03:53.037 15:27:32 -- common/autotest_common.sh@930 -- # kill -0 1990973 00:03:53.037 15:27:32 -- common/autotest_common.sh@931 -- # uname 00:03:53.038 15:27:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:03:53.038 15:27:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1990973 00:03:53.038 15:27:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:03:53.038 15:27:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:03:53.038 15:27:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1990973' 00:03:53.038 killing process with pid 1990973 00:03:53.038 15:27:32 -- common/autotest_common.sh@945 -- # kill 1990973 00:03:53.038 15:27:32 -- common/autotest_common.sh@950 -- # wait 1990973 00:03:53.604 00:03:53.604 real 0m2.363s 00:03:53.604 user 0m3.005s 00:03:53.604 sys 0m0.557s 00:03:53.604 15:27:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.604 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.604 ************************************ 00:03:53.604 END TEST rpc 00:03:53.604 ************************************ 00:03:53.604 15:27:32 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:03:53.604 15:27:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.604 15:27:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.604 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.604 ************************************ 00:03:53.604 START TEST rpc_client 00:03:53.604 ************************************ 00:03:53.604 15:27:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
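For reference, the rpc_integrity and rpc_daemon_integrity cases traced above exercise the same create/inspect/delete cycle over the JSON-RPC socket: create a malloc bdev, layer a passthru bdev on top of it, check the bdev_get_bdevs output with jq, then tear both down. rpc_cmd in the trace is essentially the suite's wrapper around scripts/rpc.py, so a minimal hand-run sketch against the same running target (paths relative to the SPDK tree, default RPC socket assumed) looks roughly like:

  $ scripts/rpc.py bdev_malloc_create 8 512             # 8 MB bdev, 512-byte blocks; prints the new name (Malloc0)
  $ scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  $ scripts/rpc.py bdev_get_bdevs | jq length           # the test expects 2 here
  $ scripts/rpc.py bdev_passthru_delete Passthru0
  $ scripts/rpc.py bdev_malloc_delete Malloc0
  $ scripts/rpc.py bdev_get_bdevs | jq length           # and 0 again after cleanup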
00:03:53.604 * Looking for test storage... 00:03:53.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:03:53.604 15:27:32 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:03:53.604 OK 00:03:53.604 15:27:32 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:03:53.604 00:03:53.604 real 0m0.067s 00:03:53.604 user 0m0.024s 00:03:53.604 sys 0m0.049s 00:03:53.604 15:27:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:53.604 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.604 ************************************ 00:03:53.604 END TEST rpc_client 00:03:53.604 ************************************ 00:03:53.604 15:27:32 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:53.604 15:27:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:53.604 15:27:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:53.604 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.604 ************************************ 00:03:53.604 START TEST json_config 00:03:53.604 ************************************ 00:03:53.604 15:27:32 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:03:53.604 15:27:32 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:53.604 15:27:32 -- nvmf/common.sh@7 -- # uname -s 00:03:53.604 15:27:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:53.604 15:27:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:53.604 15:27:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:53.604 15:27:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:53.604 15:27:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:53.604 15:27:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:53.604 15:27:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:53.604 15:27:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:53.604 15:27:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:53.604 15:27:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:53.604 15:27:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:53.604 15:27:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:53.604 15:27:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:53.604 15:27:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:53.604 15:27:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:53.604 15:27:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:53.604 15:27:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:53.604 15:27:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:53.604 15:27:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:53.604 15:27:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.604 15:27:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.604 15:27:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.604 15:27:32 -- paths/export.sh@5 -- # export PATH 00:03:53.605 15:27:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:53.605 15:27:32 -- nvmf/common.sh@46 -- # : 0 00:03:53.605 15:27:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:53.605 15:27:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:53.605 15:27:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:53.605 15:27:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:53.605 15:27:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:53.605 15:27:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:53.605 15:27:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:03:53.605 15:27:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:53.605 15:27:32 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:03:53.605 15:27:32 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:03:53.605 15:27:32 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:03:53.605 15:27:32 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:03:53.605 15:27:32 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:03:53.605 15:27:32 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:03:53.605 15:27:32 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:03:53.605 15:27:32 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:03:53.605 15:27:32 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:03:53.605 15:27:32 -- json_config/json_config.sh@32 -- # declare -A app_params 00:03:53.605 15:27:32 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:03:53.605 15:27:32 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:03:53.605 15:27:32 -- json_config/json_config.sh@43 -- # last_event_id=0 00:03:53.605 15:27:32 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:53.605 15:27:32 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:03:53.605 INFO: JSON configuration test init 00:03:53.605 15:27:32 -- json_config/json_config.sh@420 -- # json_config_test_init 00:03:53.605 15:27:32 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:03:53.605 15:27:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:53.605 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.605 15:27:32 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:03:53.605 15:27:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:53.605 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.605 15:27:32 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:03:53.605 15:27:32 -- json_config/json_config.sh@98 -- # local app=target 00:03:53.605 15:27:32 -- json_config/json_config.sh@99 -- # shift 00:03:53.605 15:27:32 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:03:53.605 15:27:32 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:03:53.605 15:27:32 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:03:53.605 15:27:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:53.605 15:27:32 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:03:53.605 15:27:32 -- json_config/json_config.sh@111 -- # app_pid[$app]=1991455 00:03:53.605 15:27:32 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:03:53.605 15:27:32 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:03:53.605 Waiting for target to run... 00:03:53.605 15:27:32 -- json_config/json_config.sh@114 -- # waitforlisten 1991455 /var/tmp/spdk_tgt.sock 00:03:53.605 15:27:32 -- common/autotest_common.sh@819 -- # '[' -z 1991455 ']' 00:03:53.605 15:27:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:53.605 15:27:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:03:53.605 15:27:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:53.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:53.605 15:27:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:03:53.605 15:27:32 -- common/autotest_common.sh@10 -- # set +x 00:03:53.605 [2024-07-10 15:27:32.897206] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
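The target whose startup banner appears here was launched by json_config_test_start_app with --wait-for-rpc, i.e. it stays paused until configuration arrives over RPC. A minimal manual equivalent of this launch (paths relative to the SPDK tree; the explicit framework_start_init call stands in for the load_config step the suite uses further down to finish initialization):

  $ build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock framework_start_init   # resume subsystem init once configuration is in place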
00:03:53.605 [2024-07-10 15:27:32.897294] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991455 ] 00:03:53.605 EAL: No free 2048 kB hugepages reported on node 1 00:03:54.171 [2024-07-10 15:27:33.396813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.171 [2024-07-10 15:27:33.499455] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:03:54.171 [2024-07-10 15:27:33.499655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.736 15:27:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:03:54.736 15:27:33 -- common/autotest_common.sh@852 -- # return 0 00:03:54.736 15:27:33 -- json_config/json_config.sh@115 -- # echo '' 00:03:54.736 00:03:54.736 15:27:33 -- json_config/json_config.sh@322 -- # create_accel_config 00:03:54.736 15:27:33 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:03:54.736 15:27:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:54.736 15:27:33 -- common/autotest_common.sh@10 -- # set +x 00:03:54.736 15:27:33 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:03:54.736 15:27:33 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:03:54.736 15:27:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:54.736 15:27:33 -- common/autotest_common.sh@10 -- # set +x 00:03:54.736 15:27:33 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:03:54.736 15:27:33 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:03:54.736 15:27:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:03:58.084 15:27:37 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:03:58.084 15:27:37 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:03:58.084 15:27:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:58.084 15:27:37 -- common/autotest_common.sh@10 -- # set +x 00:03:58.084 15:27:37 -- json_config/json_config.sh@48 -- # local ret=0 00:03:58.084 15:27:37 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:03:58.084 15:27:37 -- json_config/json_config.sh@49 -- # local enabled_types 00:03:58.084 15:27:37 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:03:58.084 15:27:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:03:58.084 15:27:37 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:03:58.084 15:27:37 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:03:58.084 15:27:37 -- json_config/json_config.sh@51 -- # local get_types 00:03:58.084 15:27:37 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:03:58.084 15:27:37 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:03:58.084 15:27:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:58.084 15:27:37 -- common/autotest_common.sh@10 -- # set +x 00:03:58.084 15:27:37 -- json_config/json_config.sh@58 -- # return 0 00:03:58.084 15:27:37 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:03:58.084 15:27:37 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:03:58.084 15:27:37 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:03:58.084 15:27:37 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:03:58.084 15:27:37 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:03:58.084 15:27:37 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:03:58.084 15:27:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:03:58.084 15:27:37 -- common/autotest_common.sh@10 -- # set +x 00:03:58.084 15:27:37 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:03:58.084 15:27:37 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:03:58.084 15:27:37 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:03:58.084 15:27:37 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:58.084 15:27:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:03:58.341 MallocForNvmf0 00:03:58.341 15:27:37 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:58.341 15:27:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:03:58.599 MallocForNvmf1 00:03:58.599 15:27:37 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:03:58.599 15:27:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:03:58.599 [2024-07-10 15:27:37.954115] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:58.599 15:27:37 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:58.599 15:27:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:03:58.857 15:27:38 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:58.857 15:27:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:03:59.115 15:27:38 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:59.115 15:27:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:03:59.372 15:27:38 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:59.372 15:27:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:03:59.630 [2024-07-10 15:27:38.873125] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
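create_nvmf_subsystem_config, whose tgt_rpc calls are traced above, builds a small NVMe-oF/TCP target out of two malloc bdevs. Collapsed into direct rpc.py invocations it is roughly the following sketch ($RPC is just shorthand introduced here; paths relative to the SPDK tree):

  $ RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
  $ $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $ $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  $ $RPC nvmf_create_transport -t tcp -u 8192 -c 0
  $ $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host, -s: serial number
  $ $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $ $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $ $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420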
00:03:59.630 15:27:38 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:03:59.630 15:27:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:59.630 15:27:38 -- common/autotest_common.sh@10 -- # set +x 00:03:59.630 15:27:38 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:03:59.630 15:27:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:59.630 15:27:38 -- common/autotest_common.sh@10 -- # set +x 00:03:59.630 15:27:38 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:03:59.630 15:27:38 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:59.630 15:27:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:03:59.887 MallocBdevForConfigChangeCheck 00:03:59.887 15:27:39 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:03:59.887 15:27:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:03:59.887 15:27:39 -- common/autotest_common.sh@10 -- # set +x 00:03:59.887 15:27:39 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:03:59.888 15:27:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:00.452 15:27:39 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:04:00.453 INFO: shutting down applications... 00:04:00.453 15:27:39 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:04:00.453 15:27:39 -- json_config/json_config.sh@431 -- # json_config_clear target 00:04:00.453 15:27:39 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:04:00.453 15:27:39 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:01.826 Calling clear_iscsi_subsystem 00:04:01.826 Calling clear_nvmf_subsystem 00:04:01.826 Calling clear_nbd_subsystem 00:04:01.826 Calling clear_ublk_subsystem 00:04:01.826 Calling clear_vhost_blk_subsystem 00:04:01.826 Calling clear_vhost_scsi_subsystem 00:04:01.826 Calling clear_scheduler_subsystem 00:04:01.826 Calling clear_bdev_subsystem 00:04:01.826 Calling clear_accel_subsystem 00:04:01.826 Calling clear_vmd_subsystem 00:04:01.826 Calling clear_sock_subsystem 00:04:01.826 Calling clear_iobuf_subsystem 00:04:01.826 15:27:41 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:01.826 15:27:41 -- json_config/json_config.sh@396 -- # count=100 00:04:01.826 15:27:41 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:04:01.826 15:27:41 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:01.826 15:27:41 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:01.826 15:27:41 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:02.392 15:27:41 -- json_config/json_config.sh@398 -- # break 00:04:02.392 15:27:41 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:04:02.392 15:27:41 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:04:02.392 15:27:41 -- json_config/json_config.sh@120 -- # local app=target 00:04:02.392 15:27:41 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:04:02.392 15:27:41 -- json_config/json_config.sh@124 -- # [[ -n 1991455 ]] 00:04:02.392 15:27:41 -- json_config/json_config.sh@127 -- # kill -SIGINT 1991455 00:04:02.392 15:27:41 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:04:02.392 15:27:41 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:02.392 15:27:41 -- json_config/json_config.sh@130 -- # kill -0 1991455 00:04:02.392 15:27:41 -- json_config/json_config.sh@134 -- # sleep 0.5 00:04:02.650 15:27:42 -- json_config/json_config.sh@129 -- # (( i++ )) 00:04:02.650 15:27:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:04:02.650 15:27:42 -- json_config/json_config.sh@130 -- # kill -0 1991455 00:04:02.650 15:27:42 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:04:02.651 15:27:42 -- json_config/json_config.sh@132 -- # break 00:04:02.651 15:27:42 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:04:02.651 15:27:42 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:04:02.651 SPDK target shutdown done 00:04:02.651 15:27:42 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:04:02.651 INFO: relaunching applications... 00:04:02.651 15:27:42 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.651 15:27:42 -- json_config/json_config.sh@98 -- # local app=target 00:04:02.651 15:27:42 -- json_config/json_config.sh@99 -- # shift 00:04:02.651 15:27:42 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:04:02.651 15:27:42 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:04:02.651 15:27:42 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:04:02.651 15:27:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:02.651 15:27:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:04:02.651 15:27:42 -- json_config/json_config.sh@111 -- # app_pid[$app]=1992678 00:04:02.651 15:27:42 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:02.651 15:27:42 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:04:02.651 Waiting for target to run... 00:04:02.651 15:27:42 -- json_config/json_config.sh@114 -- # waitforlisten 1992678 /var/tmp/spdk_tgt.sock 00:04:02.651 15:27:42 -- common/autotest_common.sh@819 -- # '[' -z 1992678 ']' 00:04:02.651 15:27:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:02.651 15:27:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:02.651 15:27:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:02.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:02.651 15:27:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:02.651 15:27:42 -- common/autotest_common.sh@10 -- # set +x 00:04:02.908 [2024-07-10 15:27:42.067362] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
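The restart recorded above is the core of the test: the live configuration is captured with save_config, the target is stopped, and a second target is relaunched from the saved file. By hand that is roughly (the output filename mirrors the one the suite uses; <target_pid> is a placeholder for the first spdk_tgt instance):

  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  $ kill -SIGINT <target_pid>
  $ build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &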
00:04:02.908 [2024-07-10 15:27:42.067504] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1992678 ] 00:04:02.908 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.475 [2024-07-10 15:27:42.575610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:03.475 [2024-07-10 15:27:42.677079] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:03.475 [2024-07-10 15:27:42.677268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.767 [2024-07-10 15:27:45.715321] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:06.767 [2024-07-10 15:27:45.747816] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:06.767 15:27:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:06.767 15:27:45 -- common/autotest_common.sh@852 -- # return 0 00:04:06.767 15:27:45 -- json_config/json_config.sh@115 -- # echo '' 00:04:06.767 00:04:06.767 15:27:45 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:04:06.767 15:27:45 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:06.767 INFO: Checking if target configuration is the same... 00:04:06.767 15:27:45 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.767 15:27:45 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:04:06.767 15:27:45 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:06.767 + '[' 2 -ne 2 ']' 00:04:06.767 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:06.767 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:06.767 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:06.767 +++ basename /dev/fd/62 00:04:06.767 ++ mktemp /tmp/62.XXX 00:04:06.767 + tmp_file_1=/tmp/62.QkC 00:04:06.767 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:06.767 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:06.767 + tmp_file_2=/tmp/spdk_tgt_config.json.hbh 00:04:06.767 + ret=0 00:04:06.767 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.024 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.024 + diff -u /tmp/62.QkC /tmp/spdk_tgt_config.json.hbh 00:04:07.024 + echo 'INFO: JSON config files are the same' 00:04:07.024 INFO: JSON config files are the same 00:04:07.024 + rm /tmp/62.QkC /tmp/spdk_tgt_config.json.hbh 00:04:07.024 + exit 0 00:04:07.024 15:27:46 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:04:07.024 15:27:46 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:07.024 INFO: changing configuration and checking if this can be detected... 
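The "JSON config files are the same" verdict above comes from json_diff.sh, which runs both the saved file and a fresh save_config dump through config_filter.py -method sort before diffing, presumably so JSON ordering differences cannot cause false mismatches. A hand-rolled equivalent sketch (temp-file names are arbitrary, and config_filter.py is assumed to read stdin as the trace suggests):

  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_config.json
  $ test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved_config.json
  $ diff -u /tmp/saved_config.json /tmp/live_config.json && echo 'INFO: JSON config files are the same'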
00:04:07.024 15:27:46 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:07.024 15:27:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:07.281 15:27:46 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.281 15:27:46 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:04:07.281 15:27:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:07.281 + '[' 2 -ne 2 ']' 00:04:07.281 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:07.281 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:07.281 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:07.281 +++ basename /dev/fd/62 00:04:07.281 ++ mktemp /tmp/62.XXX 00:04:07.281 + tmp_file_1=/tmp/62.Ghu 00:04:07.281 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:07.281 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:07.281 + tmp_file_2=/tmp/spdk_tgt_config.json.KJ2 00:04:07.281 + ret=0 00:04:07.281 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.538 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:07.796 + diff -u /tmp/62.Ghu /tmp/spdk_tgt_config.json.KJ2 00:04:07.796 + ret=1 00:04:07.796 + echo '=== Start of file: /tmp/62.Ghu ===' 00:04:07.796 + cat /tmp/62.Ghu 00:04:07.796 + echo '=== End of file: /tmp/62.Ghu ===' 00:04:07.796 + echo '' 00:04:07.796 + echo '=== Start of file: /tmp/spdk_tgt_config.json.KJ2 ===' 00:04:07.796 + cat /tmp/spdk_tgt_config.json.KJ2 00:04:07.796 + echo '=== End of file: /tmp/spdk_tgt_config.json.KJ2 ===' 00:04:07.796 + echo '' 00:04:07.796 + rm /tmp/62.Ghu /tmp/spdk_tgt_config.json.KJ2 00:04:07.796 + exit 1 00:04:07.796 15:27:46 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:04:07.796 INFO: configuration change detected. 
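The negative check just logged works the other way round: deleting the MallocBdevForConfigChangeCheck marker bdev makes the live configuration diverge from the saved file, so the same sorted diff must now fail. Roughly, reusing the temp files from the previous sketch:

  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  $ scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live_config.json
  $ diff -u /tmp/saved_config.json /tmp/live_config.json || echo 'INFO: configuration change detected.'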
00:04:07.796 15:27:46 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:04:07.796 15:27:46 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:04:07.796 15:27:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:07.796 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:07.796 15:27:46 -- json_config/json_config.sh@360 -- # local ret=0 00:04:07.796 15:27:46 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:04:07.796 15:27:46 -- json_config/json_config.sh@370 -- # [[ -n 1992678 ]] 00:04:07.796 15:27:46 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:04:07.796 15:27:46 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:04:07.796 15:27:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:07.796 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:07.796 15:27:46 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:04:07.796 15:27:46 -- json_config/json_config.sh@246 -- # uname -s 00:04:07.796 15:27:46 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:04:07.796 15:27:46 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:04:07.796 15:27:46 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:04:07.796 15:27:46 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:04:07.796 15:27:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:07.796 15:27:46 -- common/autotest_common.sh@10 -- # set +x 00:04:07.796 15:27:46 -- json_config/json_config.sh@376 -- # killprocess 1992678 00:04:07.796 15:27:46 -- common/autotest_common.sh@926 -- # '[' -z 1992678 ']' 00:04:07.796 15:27:46 -- common/autotest_common.sh@930 -- # kill -0 1992678 00:04:07.796 15:27:46 -- common/autotest_common.sh@931 -- # uname 00:04:07.796 15:27:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:07.796 15:27:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1992678 00:04:07.796 15:27:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:07.796 15:27:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:07.796 15:27:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1992678' 00:04:07.796 killing process with pid 1992678 00:04:07.796 15:27:47 -- common/autotest_common.sh@945 -- # kill 1992678 00:04:07.796 15:27:47 -- common/autotest_common.sh@950 -- # wait 1992678 00:04:09.693 15:27:48 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:09.693 15:27:48 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:04:09.693 15:27:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:09.693 15:27:48 -- common/autotest_common.sh@10 -- # set +x 00:04:09.693 15:27:48 -- json_config/json_config.sh@381 -- # return 0 00:04:09.693 15:27:48 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:04:09.693 INFO: Success 00:04:09.693 00:04:09.693 real 0m15.889s 00:04:09.693 user 0m17.920s 00:04:09.693 sys 0m2.191s 00:04:09.693 15:27:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.693 15:27:48 -- common/autotest_common.sh@10 -- # set +x 00:04:09.693 ************************************ 00:04:09.693 END TEST json_config 00:04:09.693 ************************************ 00:04:09.693 15:27:48 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:09.693 15:27:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:09.693 15:27:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:09.693 15:27:48 -- common/autotest_common.sh@10 -- # set +x 00:04:09.693 ************************************ 00:04:09.693 START TEST json_config_extra_key 00:04:09.693 ************************************ 00:04:09.693 15:27:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:09.693 15:27:48 -- nvmf/common.sh@7 -- # uname -s 00:04:09.693 15:27:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:09.693 15:27:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:09.693 15:27:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:09.693 15:27:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:09.693 15:27:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:09.693 15:27:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:09.693 15:27:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:09.693 15:27:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:09.693 15:27:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:09.693 15:27:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:09.693 15:27:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.693 15:27:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:09.693 15:27:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:09.693 15:27:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:09.693 15:27:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:09.693 15:27:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:09.693 15:27:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:09.693 15:27:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:09.693 15:27:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:09.693 15:27:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.693 15:27:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.693 15:27:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.693 15:27:48 -- paths/export.sh@5 -- # export PATH 00:04:09.693 15:27:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:09.693 15:27:48 -- nvmf/common.sh@46 -- # : 0 00:04:09.693 15:27:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:09.693 15:27:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:09.693 15:27:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:09.693 15:27:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:09.693 15:27:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:09.693 15:27:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:09.693 15:27:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:09.693 15:27:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:04:09.693 INFO: launching applications... 
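The json_config_extra_key trace above tracks its target application through a set of bash associative arrays (app_pid, app_socket, app_params, configs_path), all keyed by the app name, before launching spdk_tgt with the extra_key.json config. A minimal sketch of that bookkeeping pattern follows; the launch_app helper name and the spdk_tgt/config paths are illustrative assumptions, not the harness's own code.

#!/usr/bin/env bash
# Sketch of the per-app bookkeeping seen in the trace above (assumed paths).
declare -A app_pid=([target]='')
declare -A app_socket=([target]='/var/tmp/spdk_tgt.sock')
declare -A app_params=([target]='-m 0x1 -s 1024')
declare -A configs_path=([target]='./test/json_config/extra_key.json')

launch_app() {
    local app=$1
    # Start spdk_tgt with the per-app core mask, RPC socket and JSON config,
    # then remember its pid so a later shutdown step can signal it.
    ./build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" \
        --json "${configs_path[$app]}" &
    app_pid[$app]=$!
    echo "Waiting for $app to run (pid ${app_pid[$app]})..."
}

launch_app target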
00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@25 -- # shift 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1993613 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:04:09.693 Waiting for target to run... 00:04:09.693 15:27:48 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1993613 /var/tmp/spdk_tgt.sock 00:04:09.693 15:27:48 -- common/autotest_common.sh@819 -- # '[' -z 1993613 ']' 00:04:09.693 15:27:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:09.693 15:27:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:09.693 15:27:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:09.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:09.693 15:27:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:09.693 15:27:48 -- common/autotest_common.sh@10 -- # set +x 00:04:09.693 [2024-07-10 15:27:48.806933] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:09.693 [2024-07-10 15:27:48.807012] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1993613 ] 00:04:09.693 EAL: No free 2048 kB hugepages reported on node 1 00:04:09.951 [2024-07-10 15:27:49.149506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:09.951 [2024-07-10 15:27:49.235939] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:09.951 [2024-07-10 15:27:49.236137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:10.517 15:27:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:10.517 15:27:49 -- common/autotest_common.sh@852 -- # return 0 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:04:10.517 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:04:10.517 INFO: shutting down applications... 
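The shutdown sequence that follows sends SIGINT to the target and then polls the pid for up to roughly 15 seconds (30 iterations of a 0.5 s sleep) before declaring it gone. A rough standalone equivalent of that loop, assuming only a pid argument, is sketched below.

# Graceful-shutdown sketch: SIGINT first, then poll until the process exits.
shutdown_app() {
    local pid=$1
    kill -SIGINT "$pid"                  # ask the target to shut down cleanly
    for ((i = 0; i < 30; i++)); do
        # kill -0 only probes for existence; it sends no signal.
        if ! kill -0 "$pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            return 0
        fi
        sleep 0.5
    done
    echo "target $pid still alive after 15s" >&2
    return 1
}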
00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1993613 ]] 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1993613 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1993613 00:04:10.517 15:27:49 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1993613 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@52 -- # break 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:04:11.084 SPDK target shutdown done 00:04:11.084 15:27:50 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:04:11.084 Success 00:04:11.084 00:04:11.084 real 0m1.523s 00:04:11.084 user 0m1.505s 00:04:11.084 sys 0m0.444s 00:04:11.084 15:27:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.084 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.084 ************************************ 00:04:11.084 END TEST json_config_extra_key 00:04:11.084 ************************************ 00:04:11.084 15:27:50 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.084 15:27:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.084 15:27:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.084 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.084 ************************************ 00:04:11.084 START TEST alias_rpc 00:04:11.084 ************************************ 00:04:11.084 15:27:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:11.084 * Looking for test storage... 00:04:11.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:11.084 15:27:50 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:11.084 15:27:50 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1993924 00:04:11.084 15:27:50 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:11.084 15:27:50 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1993924 00:04:11.084 15:27:50 -- common/autotest_common.sh@819 -- # '[' -z 1993924 ']' 00:04:11.084 15:27:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:11.084 15:27:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:11.084 15:27:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:11.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:11.084 15:27:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:11.084 15:27:50 -- common/autotest_common.sh@10 -- # set +x 00:04:11.084 [2024-07-10 15:27:50.353533] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:11.084 [2024-07-10 15:27:50.353631] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1993924 ] 00:04:11.084 EAL: No free 2048 kB hugepages reported on node 1 00:04:11.084 [2024-07-10 15:27:50.410313] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.343 [2024-07-10 15:27:50.514709] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:11.343 [2024-07-10 15:27:50.514902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.910 15:27:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:11.910 15:27:51 -- common/autotest_common.sh@852 -- # return 0 00:04:11.910 15:27:51 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:12.169 15:27:51 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1993924 00:04:12.169 15:27:51 -- common/autotest_common.sh@926 -- # '[' -z 1993924 ']' 00:04:12.169 15:27:51 -- common/autotest_common.sh@930 -- # kill -0 1993924 00:04:12.169 15:27:51 -- common/autotest_common.sh@931 -- # uname 00:04:12.169 15:27:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:12.169 15:27:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1993924 00:04:12.427 15:27:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:12.427 15:27:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:12.427 15:27:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1993924' 00:04:12.427 killing process with pid 1993924 00:04:12.427 15:27:51 -- common/autotest_common.sh@945 -- # kill 1993924 00:04:12.427 15:27:51 -- common/autotest_common.sh@950 -- # wait 1993924 00:04:12.685 00:04:12.685 real 0m1.764s 00:04:12.685 user 0m2.009s 00:04:12.685 sys 0m0.452s 00:04:12.685 15:27:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.685 15:27:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.685 ************************************ 00:04:12.685 END TEST alias_rpc 00:04:12.685 ************************************ 00:04:12.685 15:27:52 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:04:12.685 15:27:52 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:12.686 15:27:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:12.686 15:27:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:12.686 15:27:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.686 ************************************ 00:04:12.686 START TEST spdkcli_tcp 00:04:12.686 ************************************ 00:04:12.686 15:27:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:12.944 * Looking for test storage... 
00:04:12.944 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:12.944 15:27:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:12.944 15:27:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:12.944 15:27:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:12.944 15:27:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1994125 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:12.944 15:27:52 -- spdkcli/tcp.sh@27 -- # waitforlisten 1994125 00:04:12.944 15:27:52 -- common/autotest_common.sh@819 -- # '[' -z 1994125 ']' 00:04:12.944 15:27:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:12.944 15:27:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:12.944 15:27:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:12.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:12.944 15:27:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:12.944 15:27:52 -- common/autotest_common.sh@10 -- # set +x 00:04:12.944 [2024-07-10 15:27:52.154229] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:12.945 [2024-07-10 15:27:52.154304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994125 ] 00:04:12.945 EAL: No free 2048 kB hugepages reported on node 1 00:04:12.945 [2024-07-10 15:27:52.213292] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:12.945 [2024-07-10 15:27:52.318988] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:12.945 [2024-07-10 15:27:52.319209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:12.945 [2024-07-10 15:27:52.319215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:13.878 15:27:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:13.878 15:27:53 -- common/autotest_common.sh@852 -- # return 0 00:04:13.878 15:27:53 -- spdkcli/tcp.sh@31 -- # socat_pid=1994265 00:04:13.878 15:27:53 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:13.878 15:27:53 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:14.136 [ 00:04:14.136 "bdev_malloc_delete", 00:04:14.136 "bdev_malloc_create", 00:04:14.136 "bdev_null_resize", 00:04:14.136 "bdev_null_delete", 00:04:14.136 "bdev_null_create", 00:04:14.136 "bdev_nvme_cuse_unregister", 00:04:14.136 "bdev_nvme_cuse_register", 00:04:14.136 "bdev_opal_new_user", 00:04:14.136 "bdev_opal_set_lock_state", 00:04:14.136 "bdev_opal_delete", 00:04:14.136 "bdev_opal_get_info", 00:04:14.136 "bdev_opal_create", 00:04:14.136 "bdev_nvme_opal_revert", 00:04:14.136 "bdev_nvme_opal_init", 00:04:14.136 "bdev_nvme_send_cmd", 00:04:14.136 "bdev_nvme_get_path_iostat", 00:04:14.136 "bdev_nvme_get_mdns_discovery_info", 00:04:14.136 "bdev_nvme_stop_mdns_discovery", 00:04:14.136 "bdev_nvme_start_mdns_discovery", 00:04:14.136 "bdev_nvme_set_multipath_policy", 00:04:14.136 "bdev_nvme_set_preferred_path", 00:04:14.136 "bdev_nvme_get_io_paths", 00:04:14.136 "bdev_nvme_remove_error_injection", 00:04:14.136 "bdev_nvme_add_error_injection", 00:04:14.136 "bdev_nvme_get_discovery_info", 00:04:14.136 "bdev_nvme_stop_discovery", 00:04:14.136 "bdev_nvme_start_discovery", 00:04:14.136 "bdev_nvme_get_controller_health_info", 00:04:14.136 "bdev_nvme_disable_controller", 00:04:14.136 "bdev_nvme_enable_controller", 00:04:14.136 "bdev_nvme_reset_controller", 00:04:14.136 "bdev_nvme_get_transport_statistics", 00:04:14.136 "bdev_nvme_apply_firmware", 00:04:14.136 "bdev_nvme_detach_controller", 00:04:14.136 "bdev_nvme_get_controllers", 00:04:14.136 "bdev_nvme_attach_controller", 00:04:14.136 "bdev_nvme_set_hotplug", 00:04:14.136 "bdev_nvme_set_options", 00:04:14.136 "bdev_passthru_delete", 00:04:14.136 "bdev_passthru_create", 00:04:14.136 "bdev_lvol_grow_lvstore", 00:04:14.136 "bdev_lvol_get_lvols", 00:04:14.136 "bdev_lvol_get_lvstores", 00:04:14.136 "bdev_lvol_delete", 00:04:14.136 "bdev_lvol_set_read_only", 00:04:14.136 "bdev_lvol_resize", 00:04:14.136 "bdev_lvol_decouple_parent", 00:04:14.136 "bdev_lvol_inflate", 00:04:14.136 "bdev_lvol_rename", 00:04:14.136 "bdev_lvol_clone_bdev", 00:04:14.136 "bdev_lvol_clone", 00:04:14.136 "bdev_lvol_snapshot", 00:04:14.136 "bdev_lvol_create", 00:04:14.136 "bdev_lvol_delete_lvstore", 00:04:14.136 "bdev_lvol_rename_lvstore", 00:04:14.136 "bdev_lvol_create_lvstore", 00:04:14.136 "bdev_raid_set_options", 00:04:14.136 
"bdev_raid_remove_base_bdev", 00:04:14.136 "bdev_raid_add_base_bdev", 00:04:14.136 "bdev_raid_delete", 00:04:14.136 "bdev_raid_create", 00:04:14.136 "bdev_raid_get_bdevs", 00:04:14.136 "bdev_error_inject_error", 00:04:14.136 "bdev_error_delete", 00:04:14.136 "bdev_error_create", 00:04:14.136 "bdev_split_delete", 00:04:14.136 "bdev_split_create", 00:04:14.136 "bdev_delay_delete", 00:04:14.136 "bdev_delay_create", 00:04:14.136 "bdev_delay_update_latency", 00:04:14.136 "bdev_zone_block_delete", 00:04:14.136 "bdev_zone_block_create", 00:04:14.136 "blobfs_create", 00:04:14.136 "blobfs_detect", 00:04:14.136 "blobfs_set_cache_size", 00:04:14.136 "bdev_aio_delete", 00:04:14.136 "bdev_aio_rescan", 00:04:14.136 "bdev_aio_create", 00:04:14.136 "bdev_ftl_set_property", 00:04:14.136 "bdev_ftl_get_properties", 00:04:14.136 "bdev_ftl_get_stats", 00:04:14.136 "bdev_ftl_unmap", 00:04:14.136 "bdev_ftl_unload", 00:04:14.136 "bdev_ftl_delete", 00:04:14.136 "bdev_ftl_load", 00:04:14.136 "bdev_ftl_create", 00:04:14.136 "bdev_virtio_attach_controller", 00:04:14.136 "bdev_virtio_scsi_get_devices", 00:04:14.136 "bdev_virtio_detach_controller", 00:04:14.136 "bdev_virtio_blk_set_hotplug", 00:04:14.136 "bdev_iscsi_delete", 00:04:14.136 "bdev_iscsi_create", 00:04:14.136 "bdev_iscsi_set_options", 00:04:14.136 "accel_error_inject_error", 00:04:14.136 "ioat_scan_accel_module", 00:04:14.136 "dsa_scan_accel_module", 00:04:14.136 "iaa_scan_accel_module", 00:04:14.136 "iscsi_set_options", 00:04:14.136 "iscsi_get_auth_groups", 00:04:14.136 "iscsi_auth_group_remove_secret", 00:04:14.136 "iscsi_auth_group_add_secret", 00:04:14.136 "iscsi_delete_auth_group", 00:04:14.136 "iscsi_create_auth_group", 00:04:14.136 "iscsi_set_discovery_auth", 00:04:14.136 "iscsi_get_options", 00:04:14.136 "iscsi_target_node_request_logout", 00:04:14.136 "iscsi_target_node_set_redirect", 00:04:14.136 "iscsi_target_node_set_auth", 00:04:14.136 "iscsi_target_node_add_lun", 00:04:14.136 "iscsi_get_connections", 00:04:14.136 "iscsi_portal_group_set_auth", 00:04:14.136 "iscsi_start_portal_group", 00:04:14.136 "iscsi_delete_portal_group", 00:04:14.136 "iscsi_create_portal_group", 00:04:14.136 "iscsi_get_portal_groups", 00:04:14.136 "iscsi_delete_target_node", 00:04:14.136 "iscsi_target_node_remove_pg_ig_maps", 00:04:14.136 "iscsi_target_node_add_pg_ig_maps", 00:04:14.136 "iscsi_create_target_node", 00:04:14.136 "iscsi_get_target_nodes", 00:04:14.136 "iscsi_delete_initiator_group", 00:04:14.136 "iscsi_initiator_group_remove_initiators", 00:04:14.136 "iscsi_initiator_group_add_initiators", 00:04:14.136 "iscsi_create_initiator_group", 00:04:14.136 "iscsi_get_initiator_groups", 00:04:14.136 "nvmf_set_crdt", 00:04:14.136 "nvmf_set_config", 00:04:14.136 "nvmf_set_max_subsystems", 00:04:14.136 "nvmf_subsystem_get_listeners", 00:04:14.136 "nvmf_subsystem_get_qpairs", 00:04:14.136 "nvmf_subsystem_get_controllers", 00:04:14.136 "nvmf_get_stats", 00:04:14.136 "nvmf_get_transports", 00:04:14.136 "nvmf_create_transport", 00:04:14.136 "nvmf_get_targets", 00:04:14.137 "nvmf_delete_target", 00:04:14.137 "nvmf_create_target", 00:04:14.137 "nvmf_subsystem_allow_any_host", 00:04:14.137 "nvmf_subsystem_remove_host", 00:04:14.137 "nvmf_subsystem_add_host", 00:04:14.137 "nvmf_subsystem_remove_ns", 00:04:14.137 "nvmf_subsystem_add_ns", 00:04:14.137 "nvmf_subsystem_listener_set_ana_state", 00:04:14.137 "nvmf_discovery_get_referrals", 00:04:14.137 "nvmf_discovery_remove_referral", 00:04:14.137 "nvmf_discovery_add_referral", 00:04:14.137 "nvmf_subsystem_remove_listener", 
00:04:14.137 "nvmf_subsystem_add_listener", 00:04:14.137 "nvmf_delete_subsystem", 00:04:14.137 "nvmf_create_subsystem", 00:04:14.137 "nvmf_get_subsystems", 00:04:14.137 "env_dpdk_get_mem_stats", 00:04:14.137 "nbd_get_disks", 00:04:14.137 "nbd_stop_disk", 00:04:14.137 "nbd_start_disk", 00:04:14.137 "ublk_recover_disk", 00:04:14.137 "ublk_get_disks", 00:04:14.137 "ublk_stop_disk", 00:04:14.137 "ublk_start_disk", 00:04:14.137 "ublk_destroy_target", 00:04:14.137 "ublk_create_target", 00:04:14.137 "virtio_blk_create_transport", 00:04:14.137 "virtio_blk_get_transports", 00:04:14.137 "vhost_controller_set_coalescing", 00:04:14.137 "vhost_get_controllers", 00:04:14.137 "vhost_delete_controller", 00:04:14.137 "vhost_create_blk_controller", 00:04:14.137 "vhost_scsi_controller_remove_target", 00:04:14.137 "vhost_scsi_controller_add_target", 00:04:14.137 "vhost_start_scsi_controller", 00:04:14.137 "vhost_create_scsi_controller", 00:04:14.137 "thread_set_cpumask", 00:04:14.137 "framework_get_scheduler", 00:04:14.137 "framework_set_scheduler", 00:04:14.137 "framework_get_reactors", 00:04:14.137 "thread_get_io_channels", 00:04:14.137 "thread_get_pollers", 00:04:14.137 "thread_get_stats", 00:04:14.137 "framework_monitor_context_switch", 00:04:14.137 "spdk_kill_instance", 00:04:14.137 "log_enable_timestamps", 00:04:14.137 "log_get_flags", 00:04:14.137 "log_clear_flag", 00:04:14.137 "log_set_flag", 00:04:14.137 "log_get_level", 00:04:14.137 "log_set_level", 00:04:14.137 "log_get_print_level", 00:04:14.137 "log_set_print_level", 00:04:14.137 "framework_enable_cpumask_locks", 00:04:14.137 "framework_disable_cpumask_locks", 00:04:14.137 "framework_wait_init", 00:04:14.137 "framework_start_init", 00:04:14.137 "scsi_get_devices", 00:04:14.137 "bdev_get_histogram", 00:04:14.137 "bdev_enable_histogram", 00:04:14.137 "bdev_set_qos_limit", 00:04:14.137 "bdev_set_qd_sampling_period", 00:04:14.137 "bdev_get_bdevs", 00:04:14.137 "bdev_reset_iostat", 00:04:14.137 "bdev_get_iostat", 00:04:14.137 "bdev_examine", 00:04:14.137 "bdev_wait_for_examine", 00:04:14.137 "bdev_set_options", 00:04:14.137 "notify_get_notifications", 00:04:14.137 "notify_get_types", 00:04:14.137 "accel_get_stats", 00:04:14.137 "accel_set_options", 00:04:14.137 "accel_set_driver", 00:04:14.137 "accel_crypto_key_destroy", 00:04:14.137 "accel_crypto_keys_get", 00:04:14.137 "accel_crypto_key_create", 00:04:14.137 "accel_assign_opc", 00:04:14.137 "accel_get_module_info", 00:04:14.137 "accel_get_opc_assignments", 00:04:14.137 "vmd_rescan", 00:04:14.137 "vmd_remove_device", 00:04:14.137 "vmd_enable", 00:04:14.137 "sock_set_default_impl", 00:04:14.137 "sock_impl_set_options", 00:04:14.137 "sock_impl_get_options", 00:04:14.137 "iobuf_get_stats", 00:04:14.137 "iobuf_set_options", 00:04:14.137 "framework_get_pci_devices", 00:04:14.137 "framework_get_config", 00:04:14.137 "framework_get_subsystems", 00:04:14.137 "trace_get_info", 00:04:14.137 "trace_get_tpoint_group_mask", 00:04:14.137 "trace_disable_tpoint_group", 00:04:14.137 "trace_enable_tpoint_group", 00:04:14.137 "trace_clear_tpoint_mask", 00:04:14.137 "trace_set_tpoint_mask", 00:04:14.137 "spdk_get_version", 00:04:14.137 "rpc_get_methods" 00:04:14.137 ] 00:04:14.137 15:27:53 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:14.137 15:27:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:14.137 15:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:14.137 15:27:53 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:14.137 15:27:53 -- spdkcli/tcp.sh@38 -- # killprocess 
1994125 00:04:14.137 15:27:53 -- common/autotest_common.sh@926 -- # '[' -z 1994125 ']' 00:04:14.137 15:27:53 -- common/autotest_common.sh@930 -- # kill -0 1994125 00:04:14.137 15:27:53 -- common/autotest_common.sh@931 -- # uname 00:04:14.137 15:27:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:14.137 15:27:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1994125 00:04:14.137 15:27:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:14.137 15:27:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:14.137 15:27:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1994125' 00:04:14.137 killing process with pid 1994125 00:04:14.137 15:27:53 -- common/autotest_common.sh@945 -- # kill 1994125 00:04:14.137 15:27:53 -- common/autotest_common.sh@950 -- # wait 1994125 00:04:14.704 00:04:14.704 real 0m1.811s 00:04:14.704 user 0m3.515s 00:04:14.704 sys 0m0.483s 00:04:14.704 15:27:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:14.704 15:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:14.704 ************************************ 00:04:14.704 END TEST spdkcli_tcp 00:04:14.704 ************************************ 00:04:14.704 15:27:53 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:14.704 15:27:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:14.704 15:27:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:14.704 15:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:14.704 ************************************ 00:04:14.704 START TEST dpdk_mem_utility 00:04:14.704 ************************************ 00:04:14.704 15:27:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:14.704 * Looking for test storage... 00:04:14.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:14.704 15:27:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:14.704 15:27:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1994460 00:04:14.704 15:27:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:14.704 15:27:53 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1994460 00:04:14.704 15:27:53 -- common/autotest_common.sh@819 -- # '[' -z 1994460 ']' 00:04:14.704 15:27:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.704 15:27:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:14.704 15:27:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.704 15:27:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:14.704 15:27:53 -- common/autotest_common.sh@10 -- # set +x 00:04:14.704 [2024-07-10 15:27:53.981577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:14.704 [2024-07-10 15:27:53.981660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994460 ] 00:04:14.704 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.704 [2024-07-10 15:27:54.038475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.962 [2024-07-10 15:27:54.142646] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:14.962 [2024-07-10 15:27:54.142813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.528 15:27:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:15.528 15:27:54 -- common/autotest_common.sh@852 -- # return 0 00:04:15.528 15:27:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:15.528 15:27:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:15.528 15:27:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:15.528 15:27:54 -- common/autotest_common.sh@10 -- # set +x 00:04:15.786 { 00:04:15.786 "filename": "/tmp/spdk_mem_dump.txt" 00:04:15.786 } 00:04:15.786 15:27:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:15.786 15:27:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:15.786 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:15.786 1 heaps totaling size 814.000000 MiB 00:04:15.786 size: 814.000000 MiB heap id: 0 00:04:15.786 end heaps---------- 00:04:15.786 8 mempools totaling size 598.116089 MiB 00:04:15.786 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:15.786 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:15.786 size: 84.521057 MiB name: bdev_io_1994460 00:04:15.786 size: 51.011292 MiB name: evtpool_1994460 00:04:15.786 size: 50.003479 MiB name: msgpool_1994460 00:04:15.786 size: 21.763794 MiB name: PDU_Pool 00:04:15.786 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:15.786 size: 0.026123 MiB name: Session_Pool 00:04:15.786 end mempools------- 00:04:15.786 6 memzones totaling size 4.142822 MiB 00:04:15.786 size: 1.000366 MiB name: RG_ring_0_1994460 00:04:15.786 size: 1.000366 MiB name: RG_ring_1_1994460 00:04:15.786 size: 1.000366 MiB name: RG_ring_4_1994460 00:04:15.786 size: 1.000366 MiB name: RG_ring_5_1994460 00:04:15.786 size: 0.125366 MiB name: RG_ring_2_1994460 00:04:15.786 size: 0.015991 MiB name: RG_ring_3_1994460 00:04:15.786 end memzones------- 00:04:15.786 15:27:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:15.786 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:15.786 list of free elements. 
size: 12.519348 MiB 00:04:15.786 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:15.786 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:15.786 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:15.786 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:15.786 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:15.786 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:15.786 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:15.786 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:15.786 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:15.786 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:15.786 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:15.786 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:15.786 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:15.786 element at address: 0x200027e00000 with size: 0.410034 MiB 00:04:15.786 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:15.786 list of standard malloc elements. size: 199.218079 MiB 00:04:15.786 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:15.786 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:15.786 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:15.786 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:15.786 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:15.786 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:15.786 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:15.786 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:15.787 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:15.787 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:15.787 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:15.787 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:15.787 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:15.787 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:15.787 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:15.787 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:04:15.787 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:15.787 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:15.787 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:15.787 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:15.787 list of memzone associated elements. size: 602.262573 MiB 00:04:15.787 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:15.787 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:15.787 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:15.787 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:15.787 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:15.787 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1994460_0 00:04:15.787 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:15.787 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1994460_0 00:04:15.787 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:15.787 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1994460_0 00:04:15.787 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:15.787 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:15.787 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:15.787 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:15.787 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:15.787 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1994460 00:04:15.787 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:15.787 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1994460 00:04:15.787 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:15.787 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1994460 00:04:15.787 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:15.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:15.787 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:15.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:15.787 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:15.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:15.787 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:15.787 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:15.787 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:15.787 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1994460 00:04:15.787 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:15.787 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1994460 00:04:15.787 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:15.787 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1994460 00:04:15.787 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:15.787 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1994460 00:04:15.787 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:15.787 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1994460 00:04:15.787 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:15.787 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:15.787 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:15.787 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:15.787 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:15.787 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:15.787 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:15.787 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1994460 00:04:15.787 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:15.787 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:15.787 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:15.787 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:15.787 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:04:15.787 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1994460 00:04:15.787 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:15.787 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:15.787 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:15.787 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1994460 00:04:15.787 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:15.787 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1994460 00:04:15.787 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:15.787 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:15.787 15:27:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:15.787 15:27:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1994460 00:04:15.787 15:27:55 -- common/autotest_common.sh@926 -- # '[' -z 1994460 ']' 00:04:15.787 15:27:55 -- common/autotest_common.sh@930 -- # kill -0 1994460 00:04:15.787 15:27:55 -- common/autotest_common.sh@931 -- # uname 00:04:15.787 15:27:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:15.787 15:27:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1994460 00:04:15.787 15:27:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:15.787 15:27:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:15.787 15:27:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1994460' 00:04:15.787 killing process with pid 1994460 00:04:15.787 15:27:55 -- common/autotest_common.sh@945 -- # kill 1994460 00:04:15.787 15:27:55 -- common/autotest_common.sh@950 -- # wait 1994460 00:04:16.352 00:04:16.352 real 0m1.613s 00:04:16.352 user 0m1.773s 00:04:16.352 sys 0m0.425s 00:04:16.352 15:27:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:16.352 15:27:55 -- common/autotest_common.sh@10 -- # set +x 00:04:16.352 ************************************ 00:04:16.352 END TEST dpdk_mem_utility 00:04:16.352 ************************************ 00:04:16.352 15:27:55 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.352 15:27:55 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:16.352 15:27:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.352 15:27:55 -- common/autotest_common.sh@10 -- # set +x 
00:04:16.352 ************************************ 00:04:16.352 START TEST event 00:04:16.352 ************************************ 00:04:16.352 15:27:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:16.352 * Looking for test storage... 00:04:16.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:16.352 15:27:55 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:16.352 15:27:55 -- bdev/nbd_common.sh@6 -- # set -e 00:04:16.352 15:27:55 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.352 15:27:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:04:16.352 15:27:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:16.352 15:27:55 -- common/autotest_common.sh@10 -- # set +x 00:04:16.352 ************************************ 00:04:16.352 START TEST event_perf 00:04:16.352 ************************************ 00:04:16.352 15:27:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:16.352 Running I/O for 1 seconds...[2024-07-10 15:27:55.593400] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:16.353 [2024-07-10 15:27:55.593499] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994659 ] 00:04:16.353 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.353 [2024-07-10 15:27:55.653312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:16.611 [2024-07-10 15:27:55.766496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:16.611 [2024-07-10 15:27:55.766551] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:16.611 [2024-07-10 15:27:55.766616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:16.611 [2024-07-10 15:27:55.766619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.544 Running I/O for 1 seconds... 00:04:17.544 lcore 0: 233428 00:04:17.544 lcore 1: 233429 00:04:17.544 lcore 2: 233432 00:04:17.544 lcore 3: 233436 00:04:17.544 done. 
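The event_perf run above prints one event counter per lcore for the 0xF core mask. A small helper for totalling those counters from a saved run is sketched below; the relative binary path and the exact "lcore <n>: <count>" output format are assumptions based on the lines above, and the log path is only an example.

# Run event_perf for 1 second across cores 0-3 and total the per-lcore counts.
./test/event/event_perf/event_perf -m 0xF -t 1 | tee /tmp/event_perf.log
awk '/^lcore [0-9]+:/ { total += $3 } END { printf "total events: %d\n", total }' /tmp/event_perf.log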
00:04:17.544 00:04:17.544 real 0m1.314s 00:04:17.544 user 0m4.227s 00:04:17.544 sys 0m0.081s 00:04:17.544 15:27:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.544 15:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:17.544 ************************************ 00:04:17.544 END TEST event_perf 00:04:17.544 ************************************ 00:04:17.544 15:27:56 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.544 15:27:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:17.544 15:27:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:17.544 15:27:56 -- common/autotest_common.sh@10 -- # set +x 00:04:17.803 ************************************ 00:04:17.803 START TEST event_reactor 00:04:17.803 ************************************ 00:04:17.803 15:27:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:17.803 [2024-07-10 15:27:56.935088] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:17.803 [2024-07-10 15:27:56.935179] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994829 ] 00:04:17.803 EAL: No free 2048 kB hugepages reported on node 1 00:04:17.803 [2024-07-10 15:27:57.001301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.803 [2024-07-10 15:27:57.116225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.177 test_start 00:04:19.177 oneshot 00:04:19.177 tick 100 00:04:19.177 tick 100 00:04:19.177 tick 250 00:04:19.177 tick 100 00:04:19.177 tick 100 00:04:19.177 tick 100 00:04:19.177 tick 250 00:04:19.177 tick 500 00:04:19.177 tick 100 00:04:19.177 tick 100 00:04:19.177 tick 250 00:04:19.177 tick 100 00:04:19.177 tick 100 00:04:19.177 test_end 00:04:19.177 00:04:19.178 real 0m1.317s 00:04:19.178 user 0m1.221s 00:04:19.178 sys 0m0.091s 00:04:19.178 15:27:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:19.178 15:27:58 -- common/autotest_common.sh@10 -- # set +x 00:04:19.178 ************************************ 00:04:19.178 END TEST event_reactor 00:04:19.178 ************************************ 00:04:19.178 15:27:58 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:19.178 15:27:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:04:19.178 15:27:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:19.178 15:27:58 -- common/autotest_common.sh@10 -- # set +x 00:04:19.178 ************************************ 00:04:19.178 START TEST event_reactor_perf 00:04:19.178 ************************************ 00:04:19.178 15:27:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:19.178 [2024-07-10 15:27:58.276997] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:19.178 [2024-07-10 15:27:58.277090] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995097 ] 00:04:19.178 EAL: No free 2048 kB hugepages reported on node 1 00:04:19.178 [2024-07-10 15:27:58.339418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:19.178 [2024-07-10 15:27:58.452183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.551 test_start 00:04:20.551 test_end 00:04:20.551 Performance: 352535 events per second 00:04:20.551 00:04:20.551 real 0m1.311s 00:04:20.551 user 0m1.232s 00:04:20.551 sys 0m0.072s 00:04:20.551 15:27:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.551 15:27:59 -- common/autotest_common.sh@10 -- # set +x 00:04:20.551 ************************************ 00:04:20.551 END TEST event_reactor_perf 00:04:20.551 ************************************ 00:04:20.551 15:27:59 -- event/event.sh@49 -- # uname -s 00:04:20.551 15:27:59 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:20.551 15:27:59 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.551 15:27:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.551 15:27:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.551 15:27:59 -- common/autotest_common.sh@10 -- # set +x 00:04:20.551 ************************************ 00:04:20.551 START TEST event_scheduler 00:04:20.551 ************************************ 00:04:20.551 15:27:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:20.551 * Looking for test storage... 00:04:20.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:20.551 15:27:59 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:20.551 15:27:59 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1995278 00:04:20.551 15:27:59 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:20.551 15:27:59 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:20.551 15:27:59 -- scheduler/scheduler.sh@37 -- # waitforlisten 1995278 00:04:20.551 15:27:59 -- common/autotest_common.sh@819 -- # '[' -z 1995278 ']' 00:04:20.551 15:27:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.551 15:27:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:20.551 15:27:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.551 15:27:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:20.551 15:27:59 -- common/autotest_common.sh@10 -- # set +x 00:04:20.551 [2024-07-10 15:27:59.696580] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:20.551 [2024-07-10 15:27:59.696661] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995278 ] 00:04:20.551 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.551 [2024-07-10 15:27:59.756608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:20.551 [2024-07-10 15:27:59.862910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:20.551 [2024-07-10 15:27:59.862935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:20.551 [2024-07-10 15:27:59.862992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:04:20.551 [2024-07-10 15:27:59.862996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:20.551 15:27:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:20.551 15:27:59 -- common/autotest_common.sh@852 -- # return 0 00:04:20.551 15:27:59 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:20.551 15:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.551 15:27:59 -- common/autotest_common.sh@10 -- # set +x 00:04:20.551 POWER: Env isn't set yet! 00:04:20.551 POWER: Attempting to initialise ACPI cpufreq power management... 00:04:20.551 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:04:20.551 POWER: Cannot get available frequencies of lcore 0 00:04:20.551 POWER: Attempting to initialise PSTAT power management... 00:04:20.551 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:04:20.551 POWER: Initialized successfully for lcore 0 power management 00:04:20.551 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:04:20.551 POWER: Initialized successfully for lcore 1 power management 00:04:20.809 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:04:20.809 POWER: Initialized successfully for lcore 2 power management 00:04:20.810 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:04:20.810 POWER: Initialized successfully for lcore 3 power management 00:04:20.810 [2024-07-10 15:27:59.948628] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:20.810 [2024-07-10 15:27:59.948647] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:20.810 [2024-07-10 15:27:59.948658] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:20.810 15:27:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:27:59 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:20.810 15:27:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:27:59 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 [2024-07-10 15:28:00.054497] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
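Outside this harness, the same scheduler setup can be driven by hand over the RPC socket; both framework_set_scheduler and framework_get_scheduler appear in the rpc_get_methods dump earlier in this log. A minimal sketch follows, with the socket path assumed and the ordering (set scheduler before finishing init of a --wait-for-rpc app) taken from the trace.

# Switch a --wait-for-rpc SPDK app to the dynamic scheduler, then finish init.
RPC=./scripts/rpc.py
SOCK=/var/tmp/spdk.sock
$RPC -s $SOCK framework_set_scheduler dynamic   # must happen before start_init
$RPC -s $SOCK framework_start_init              # complete application startup
$RPC -s $SOCK framework_get_scheduler           # report the scheduler now in use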
00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:20.810 15:28:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:20.810 15:28:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 ************************************ 00:04:20.810 START TEST scheduler_create_thread 00:04:20.810 ************************************ 00:04:20.810 15:28:00 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 2 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 3 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 4 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 5 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 6 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 7 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 8 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 9 00:04:20.810 
15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 10 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:20.810 15:28:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:20.810 15:28:00 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:20.810 15:28:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:20.810 15:28:00 -- common/autotest_common.sh@10 -- # set +x 00:04:22.709 15:28:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:22.709 15:28:01 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:22.709 15:28:01 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:22.709 15:28:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:22.709 15:28:01 -- common/autotest_common.sh@10 -- # set +x 00:04:23.641 15:28:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:23.641 00:04:23.641 real 0m2.621s 00:04:23.641 user 0m0.015s 00:04:23.641 sys 0m0.003s 00:04:23.641 15:28:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.641 15:28:02 -- common/autotest_common.sh@10 -- # set +x 00:04:23.641 ************************************ 00:04:23.641 END TEST scheduler_create_thread 00:04:23.641 ************************************ 00:04:23.641 15:28:02 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:23.641 15:28:02 -- scheduler/scheduler.sh@46 -- # killprocess 1995278 00:04:23.641 15:28:02 -- common/autotest_common.sh@926 -- # '[' -z 1995278 ']' 00:04:23.641 15:28:02 -- common/autotest_common.sh@930 -- # kill -0 1995278 00:04:23.641 15:28:02 -- common/autotest_common.sh@931 -- # uname 00:04:23.641 15:28:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:23.641 15:28:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1995278 00:04:23.641 15:28:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:04:23.641 15:28:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:04:23.641 15:28:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1995278' 00:04:23.641 killing process with pid 1995278 00:04:23.641 15:28:02 -- common/autotest_common.sh@945 -- # kill 1995278 00:04:23.641 15:28:02 -- common/autotest_common.sh@950 -- # wait 1995278 00:04:23.898 [2024-07-10 15:28:03.161851] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
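Note: the scheduler_create_thread test traced above drives the scheduler entirely through plugin RPCs. A minimal sketch of the equivalent manual calls follows, assuming (as elsewhere in this run) that the rpc_cmd wrapper resolves to scripts/rpc.py talking to the scheduler test application's default RPC socket; <thread_id> is a placeholder for the id returned by scheduler_thread_create:
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # pinned thread, 100% requested busy time
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0       # pinned thread, fully idle
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active <thread_id> 50               # lower the requested busy time to 50%
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete <thread_id>                      # remove the thread again
Judging by the values used in the trace, -n is the thread name, -m the CPU mask the thread is pinned to, and -a the requested active percentage.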
00:04:24.156 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:04:24.156 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:04:24.156 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:04:24.156 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:04:24.156 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:04:24.156 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:04:24.156 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:04:24.156 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:04:24.156 00:04:24.156 real 0m3.815s 00:04:24.156 user 0m5.743s 00:04:24.156 sys 0m0.304s 00:04:24.156 15:28:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:24.156 15:28:03 -- common/autotest_common.sh@10 -- # set +x 00:04:24.156 ************************************ 00:04:24.156 END TEST event_scheduler 00:04:24.156 ************************************ 00:04:24.156 15:28:03 -- event/event.sh@51 -- # modprobe -n nbd 00:04:24.156 15:28:03 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:24.156 15:28:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:24.156 15:28:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:24.156 15:28:03 -- common/autotest_common.sh@10 -- # set +x 00:04:24.156 ************************************ 00:04:24.156 START TEST app_repeat 00:04:24.156 ************************************ 00:04:24.156 15:28:03 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:04:24.156 15:28:03 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:24.156 15:28:03 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:24.156 15:28:03 -- event/event.sh@13 -- # local nbd_list 00:04:24.156 15:28:03 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:24.156 15:28:03 -- event/event.sh@14 -- # local bdev_list 00:04:24.156 15:28:03 -- event/event.sh@15 -- # local repeat_times=4 00:04:24.156 15:28:03 -- event/event.sh@17 -- # modprobe nbd 00:04:24.156 15:28:03 -- event/event.sh@19 -- # repeat_pid=1995750 00:04:24.156 15:28:03 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:24.156 15:28:03 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.156 15:28:03 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1995750' 00:04:24.156 Process app_repeat pid: 1995750 00:04:24.156 15:28:03 -- event/event.sh@23 -- # for i in {0..2} 00:04:24.156 15:28:03 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:24.156 spdk_app_start Round 0 00:04:24.156 15:28:03 -- event/event.sh@25 -- # waitforlisten 1995750 /var/tmp/spdk-nbd.sock 00:04:24.156 15:28:03 -- common/autotest_common.sh@819 -- # '[' -z 1995750 ']' 00:04:24.156 15:28:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:24.156 15:28:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:24.156 15:28:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:04:24.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:24.156 15:28:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:24.156 15:28:03 -- common/autotest_common.sh@10 -- # set +x 00:04:24.156 [2024-07-10 15:28:03.477402] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:24.156 [2024-07-10 15:28:03.477513] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995750 ] 00:04:24.156 EAL: No free 2048 kB hugepages reported on node 1 00:04:24.414 [2024-07-10 15:28:03.542610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:24.414 [2024-07-10 15:28:03.655533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:24.414 [2024-07-10 15:28:03.655533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.347 15:28:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:25.347 15:28:04 -- common/autotest_common.sh@852 -- # return 0 00:04:25.348 15:28:04 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.348 Malloc0 00:04:25.348 15:28:04 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:25.605 Malloc1 00:04:25.605 15:28:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@12 -- # local i 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.605 15:28:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:25.861 /dev/nbd0 00:04:25.861 15:28:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:25.861 15:28:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:25.861 15:28:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:25.861 15:28:05 -- common/autotest_common.sh@857 -- # local i 00:04:25.861 15:28:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:25.861 15:28:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:25.861 15:28:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:25.861 15:28:05 -- 
common/autotest_common.sh@861 -- # break 00:04:25.861 15:28:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:25.861 15:28:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:25.861 15:28:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:25.861 1+0 records in 00:04:25.861 1+0 records out 00:04:25.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217396 s, 18.8 MB/s 00:04:25.862 15:28:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.862 15:28:05 -- common/autotest_common.sh@874 -- # size=4096 00:04:25.862 15:28:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:25.862 15:28:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:25.862 15:28:05 -- common/autotest_common.sh@877 -- # return 0 00:04:25.862 15:28:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:25.862 15:28:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:25.862 15:28:05 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:26.118 /dev/nbd1 00:04:26.118 15:28:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:26.118 15:28:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:26.118 15:28:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:26.118 15:28:05 -- common/autotest_common.sh@857 -- # local i 00:04:26.118 15:28:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:26.118 15:28:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:26.118 15:28:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:26.118 15:28:05 -- common/autotest_common.sh@861 -- # break 00:04:26.118 15:28:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:26.119 15:28:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:26.119 15:28:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:26.119 1+0 records in 00:04:26.119 1+0 records out 00:04:26.119 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000166319 s, 24.6 MB/s 00:04:26.119 15:28:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.119 15:28:05 -- common/autotest_common.sh@874 -- # size=4096 00:04:26.119 15:28:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:26.119 15:28:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:26.119 15:28:05 -- common/autotest_common.sh@877 -- # return 0 00:04:26.119 15:28:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:26.119 15:28:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:26.119 15:28:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:26.119 15:28:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.119 15:28:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:26.377 { 00:04:26.377 "nbd_device": "/dev/nbd0", 00:04:26.377 "bdev_name": "Malloc0" 00:04:26.377 }, 00:04:26.377 { 00:04:26.377 "nbd_device": "/dev/nbd1", 
00:04:26.377 "bdev_name": "Malloc1" 00:04:26.377 } 00:04:26.377 ]' 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:26.377 { 00:04:26.377 "nbd_device": "/dev/nbd0", 00:04:26.377 "bdev_name": "Malloc0" 00:04:26.377 }, 00:04:26.377 { 00:04:26.377 "nbd_device": "/dev/nbd1", 00:04:26.377 "bdev_name": "Malloc1" 00:04:26.377 } 00:04:26.377 ]' 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:26.377 /dev/nbd1' 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:26.377 /dev/nbd1' 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@65 -- # count=2 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@95 -- # count=2 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:26.377 256+0 records in 00:04:26.377 256+0 records out 00:04:26.377 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498334 s, 210 MB/s 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.377 15:28:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:26.635 256+0 records in 00:04:26.635 256+0 records out 00:04:26.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238495 s, 44.0 MB/s 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:26.635 256+0 records in 00:04:26.635 256+0 records out 00:04:26.635 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0255662 s, 41.0 MB/s 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@51 -- # local i 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.635 15:28:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@41 -- # break 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@45 -- # return 0 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:26.893 15:28:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@41 -- # break 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@45 -- # return 0 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:27.150 15:28:06 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@65 -- # true 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@65 -- # count=0 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@104 -- # count=0 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:27.408 15:28:06 -- bdev/nbd_common.sh@109 -- # return 0 00:04:27.408 15:28:06 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:27.666 15:28:06 -- event/event.sh@35 -- # 
sleep 3 00:04:27.923 [2024-07-10 15:28:07.163902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:27.923 [2024-07-10 15:28:07.275860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:27.923 [2024-07-10 15:28:07.275860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:28.181 [2024-07-10 15:28:07.337819] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:28.181 [2024-07-10 15:28:07.337892] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:30.726 15:28:09 -- event/event.sh@23 -- # for i in {0..2} 00:04:30.726 15:28:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:30.726 spdk_app_start Round 1 00:04:30.726 15:28:09 -- event/event.sh@25 -- # waitforlisten 1995750 /var/tmp/spdk-nbd.sock 00:04:30.726 15:28:09 -- common/autotest_common.sh@819 -- # '[' -z 1995750 ']' 00:04:30.726 15:28:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:30.726 15:28:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:30.726 15:28:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:30.726 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:30.726 15:28:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:30.726 15:28:09 -- common/autotest_common.sh@10 -- # set +x 00:04:30.985 15:28:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:30.985 15:28:10 -- common/autotest_common.sh@852 -- # return 0 00:04:30.985 15:28:10 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.243 Malloc0 00:04:31.243 15:28:10 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:31.501 Malloc1 00:04:31.501 15:28:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@12 -- # local i 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.501 15:28:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:31.759 /dev/nbd0 00:04:31.759 15:28:10 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:31.759 15:28:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:31.759 15:28:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:31.759 15:28:10 -- common/autotest_common.sh@857 -- # local i 00:04:31.759 15:28:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:31.759 15:28:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:31.759 15:28:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:31.759 15:28:10 -- common/autotest_common.sh@861 -- # break 00:04:31.759 15:28:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:31.759 15:28:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:31.759 15:28:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:31.759 1+0 records in 00:04:31.759 1+0 records out 00:04:31.759 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000151913 s, 27.0 MB/s 00:04:31.759 15:28:10 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.759 15:28:10 -- common/autotest_common.sh@874 -- # size=4096 00:04:31.759 15:28:10 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:31.759 15:28:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:31.759 15:28:10 -- common/autotest_common.sh@877 -- # return 0 00:04:31.759 15:28:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:31.759 15:28:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:31.759 15:28:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:32.017 /dev/nbd1 00:04:32.017 15:28:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:32.017 15:28:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:32.017 15:28:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:32.017 15:28:11 -- common/autotest_common.sh@857 -- # local i 00:04:32.017 15:28:11 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:32.017 15:28:11 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:32.017 15:28:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:32.017 15:28:11 -- common/autotest_common.sh@861 -- # break 00:04:32.017 15:28:11 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:32.017 15:28:11 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:32.017 15:28:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:32.017 1+0 records in 00:04:32.017 1+0 records out 00:04:32.017 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194309 s, 21.1 MB/s 00:04:32.017 15:28:11 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.017 15:28:11 -- common/autotest_common.sh@874 -- # size=4096 00:04:32.017 15:28:11 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:32.017 15:28:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:32.017 15:28:11 -- common/autotest_common.sh@877 -- # return 0 00:04:32.017 15:28:11 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:32.017 15:28:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:32.017 15:28:11 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.017 15:28:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.017 15:28:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:32.275 15:28:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:32.275 { 00:04:32.275 "nbd_device": "/dev/nbd0", 00:04:32.275 "bdev_name": "Malloc0" 00:04:32.275 }, 00:04:32.275 { 00:04:32.275 "nbd_device": "/dev/nbd1", 00:04:32.275 "bdev_name": "Malloc1" 00:04:32.275 } 00:04:32.275 ]' 00:04:32.275 15:28:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:32.275 { 00:04:32.275 "nbd_device": "/dev/nbd0", 00:04:32.275 "bdev_name": "Malloc0" 00:04:32.275 }, 00:04:32.275 { 00:04:32.275 "nbd_device": "/dev/nbd1", 00:04:32.275 "bdev_name": "Malloc1" 00:04:32.275 } 00:04:32.275 ]' 00:04:32.275 15:28:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:32.275 15:28:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:32.275 /dev/nbd1' 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:32.276 /dev/nbd1' 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@65 -- # count=2 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@95 -- # count=2 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:32.276 256+0 records in 00:04:32.276 256+0 records out 00:04:32.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00505932 s, 207 MB/s 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:32.276 256+0 records in 00:04:32.276 256+0 records out 00:04:32.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0236138 s, 44.4 MB/s 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:32.276 256+0 records in 00:04:32.276 256+0 records out 00:04:32.276 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0253912 s, 41.3 MB/s 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@51 -- # local i 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.276 15:28:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@41 -- # break 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:32.534 15:28:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@41 -- # break 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@45 -- # return 0 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:32.792 15:28:12 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@65 -- # true 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@65 -- # count=0 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@104 -- # count=0 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:33.050 15:28:12 -- bdev/nbd_common.sh@109 -- # return 0 00:04:33.050 15:28:12 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:33.309 15:28:12 -- event/event.sh@35 -- # sleep 3 00:04:33.567 [2024-07-10 15:28:12.899080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:33.824 [2024-07-10 15:28:13.013558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:33.824 [2024-07-10 15:28:13.013562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:33.824 [2024-07-10 15:28:13.074948] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:33.824 [2024-07-10 15:28:13.075027] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:36.349 15:28:15 -- event/event.sh@23 -- # for i in {0..2} 00:04:36.349 15:28:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:04:36.349 spdk_app_start Round 2 00:04:36.349 15:28:15 -- event/event.sh@25 -- # waitforlisten 1995750 /var/tmp/spdk-nbd.sock 00:04:36.349 15:28:15 -- common/autotest_common.sh@819 -- # '[' -z 1995750 ']' 00:04:36.349 15:28:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:36.349 15:28:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:36.349 15:28:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:36.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:36.349 15:28:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:36.349 15:28:15 -- common/autotest_common.sh@10 -- # set +x 00:04:36.606 15:28:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:36.606 15:28:15 -- common/autotest_common.sh@852 -- # return 0 00:04:36.606 15:28:15 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:36.864 Malloc0 00:04:36.864 15:28:16 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:37.122 Malloc1 00:04:37.122 15:28:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@12 -- # local i 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.122 15:28:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:37.381 /dev/nbd0 00:04:37.381 15:28:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:37.381 15:28:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:37.381 15:28:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:04:37.381 15:28:16 -- common/autotest_common.sh@857 -- # local i 00:04:37.381 15:28:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:37.381 15:28:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:37.381 15:28:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:04:37.381 15:28:16 -- common/autotest_common.sh@861 -- # break 00:04:37.381 15:28:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:37.381 15:28:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:37.381 15:28:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.381 1+0 records in 00:04:37.381 1+0 records out 00:04:37.381 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192146 s, 21.3 MB/s 00:04:37.381 15:28:16 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.381 15:28:16 -- common/autotest_common.sh@874 -- # size=4096 00:04:37.381 15:28:16 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.381 15:28:16 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:04:37.381 15:28:16 -- common/autotest_common.sh@877 -- # return 0 00:04:37.381 15:28:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.381 15:28:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.381 15:28:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:37.638 /dev/nbd1 00:04:37.638 15:28:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:37.638 15:28:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:37.638 15:28:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:04:37.638 15:28:16 -- common/autotest_common.sh@857 -- # local i 00:04:37.638 15:28:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:04:37.638 15:28:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:04:37.638 15:28:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:04:37.638 15:28:16 -- common/autotest_common.sh@861 -- # break 00:04:37.638 15:28:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:04:37.638 15:28:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:04:37.638 15:28:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:37.638 1+0 records in 00:04:37.638 1+0 records out 00:04:37.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194439 s, 21.1 MB/s 00:04:37.638 15:28:16 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.638 15:28:16 -- common/autotest_common.sh@874 -- # size=4096 00:04:37.638 15:28:16 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:37.638 15:28:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:04:37.638 15:28:16 -- common/autotest_common.sh@877 -- # return 0 00:04:37.638 15:28:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:37.638 15:28:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:37.638 15:28:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:37.638 15:28:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:37.638 15:28:16 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:37.896 { 00:04:37.896 "nbd_device": "/dev/nbd0", 00:04:37.896 "bdev_name": "Malloc0" 00:04:37.896 }, 00:04:37.896 { 00:04:37.896 "nbd_device": "/dev/nbd1", 00:04:37.896 "bdev_name": "Malloc1" 00:04:37.896 } 00:04:37.896 ]' 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:37.896 { 00:04:37.896 "nbd_device": "/dev/nbd0", 00:04:37.896 "bdev_name": "Malloc0" 00:04:37.896 }, 00:04:37.896 { 00:04:37.896 "nbd_device": "/dev/nbd1", 00:04:37.896 "bdev_name": "Malloc1" 00:04:37.896 } 00:04:37.896 ]' 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:37.896 /dev/nbd1' 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:37.896 /dev/nbd1' 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@65 -- # count=2 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@66 -- # echo 2 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@95 -- # count=2 00:04:37.896 15:28:17 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:37.896 256+0 records in 00:04:37.896 256+0 records out 00:04:37.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00496121 s, 211 MB/s 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:37.896 256+0 records in 00:04:37.896 256+0 records out 00:04:37.896 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0238999 s, 43.9 MB/s 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:37.896 15:28:17 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:38.154 256+0 records in 00:04:38.154 256+0 records out 00:04:38.154 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0257756 s, 40.7 MB/s 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@51 -- # local i 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.154 15:28:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:38.413 15:28:17 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@41 -- # break 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:38.413 15:28:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@41 -- # break 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@45 -- # return 0 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:38.672 15:28:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:38.672 15:28:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:38.672 15:28:18 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:38.672 15:28:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@65 -- # echo '' 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@65 -- # true 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@65 -- # count=0 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@66 -- # echo 0 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@104 -- # count=0 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:38.930 15:28:18 -- bdev/nbd_common.sh@109 -- # return 0 00:04:38.930 15:28:18 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:39.188 15:28:18 -- event/event.sh@35 -- # sleep 3 00:04:39.447 [2024-07-10 15:28:18.621805] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:39.447 [2024-07-10 15:28:18.735059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:39.447 [2024-07-10 15:28:18.735063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.447 [2024-07-10 15:28:18.796668] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:39.447 [2024-07-10 15:28:18.796760] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
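Note: every app_repeat round above performs the same nbd round-trip. Condensed from the trace, with the long workspace paths abbreviated as an editorial shorthand:
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096         # -> Malloc0
  rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096         # -> Malloc1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256                 # 1 MiB of random data
  dd if=nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
  dd if=nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
  cmp -b -n 1M nbdrandtest /dev/nbd0                                  # verify read-back on both devices
  cmp -b -n 1M nbdrandtest /dev/nbd1
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
  rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
  rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM         # end the round, then sleep 3 before the next one
The app_repeat binary is started with -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4; the shell loop repeats the sequence for rounds 0 through 2, and the final round 3 instance is shut down with killprocess.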
00:04:42.039 15:28:21 -- event/event.sh@38 -- # waitforlisten 1995750 /var/tmp/spdk-nbd.sock 00:04:42.039 15:28:21 -- common/autotest_common.sh@819 -- # '[' -z 1995750 ']' 00:04:42.039 15:28:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:42.039 15:28:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:42.039 15:28:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:42.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:42.039 15:28:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:42.039 15:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.297 15:28:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:42.297 15:28:21 -- common/autotest_common.sh@852 -- # return 0 00:04:42.297 15:28:21 -- event/event.sh@39 -- # killprocess 1995750 00:04:42.297 15:28:21 -- common/autotest_common.sh@926 -- # '[' -z 1995750 ']' 00:04:42.297 15:28:21 -- common/autotest_common.sh@930 -- # kill -0 1995750 00:04:42.297 15:28:21 -- common/autotest_common.sh@931 -- # uname 00:04:42.297 15:28:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:42.297 15:28:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1995750 00:04:42.297 15:28:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:42.297 15:28:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:42.297 15:28:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1995750' 00:04:42.297 killing process with pid 1995750 00:04:42.297 15:28:21 -- common/autotest_common.sh@945 -- # kill 1995750 00:04:42.297 15:28:21 -- common/autotest_common.sh@950 -- # wait 1995750 00:04:42.556 spdk_app_start is called in Round 0. 00:04:42.556 Shutdown signal received, stop current app iteration 00:04:42.556 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:42.556 spdk_app_start is called in Round 1. 00:04:42.556 Shutdown signal received, stop current app iteration 00:04:42.556 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:42.556 spdk_app_start is called in Round 2. 00:04:42.556 Shutdown signal received, stop current app iteration 00:04:42.556 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:04:42.556 spdk_app_start is called in Round 3. 
00:04:42.556 Shutdown signal received, stop current app iteration 00:04:42.556 15:28:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:42.556 15:28:21 -- event/event.sh@42 -- # return 0 00:04:42.556 00:04:42.556 real 0m18.414s 00:04:42.556 user 0m39.708s 00:04:42.556 sys 0m3.194s 00:04:42.556 15:28:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.556 15:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.556 ************************************ 00:04:42.556 END TEST app_repeat 00:04:42.556 ************************************ 00:04:42.556 15:28:21 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:42.556 15:28:21 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.556 15:28:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.556 15:28:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.556 15:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.556 ************************************ 00:04:42.556 START TEST cpu_locks 00:04:42.556 ************************************ 00:04:42.556 15:28:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:42.816 * Looking for test storage... 00:04:42.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:42.816 15:28:21 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:42.816 15:28:21 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:42.816 15:28:21 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:42.816 15:28:21 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:42.816 15:28:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.816 15:28:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.816 15:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.816 ************************************ 00:04:42.816 START TEST default_locks 00:04:42.816 ************************************ 00:04:42.816 15:28:21 -- common/autotest_common.sh@1104 -- # default_locks 00:04:42.816 15:28:21 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1998288 00:04:42.816 15:28:21 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:42.816 15:28:21 -- event/cpu_locks.sh@47 -- # waitforlisten 1998288 00:04:42.816 15:28:21 -- common/autotest_common.sh@819 -- # '[' -z 1998288 ']' 00:04:42.816 15:28:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.816 15:28:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:42.816 15:28:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.816 15:28:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:42.816 15:28:21 -- common/autotest_common.sh@10 -- # set +x 00:04:42.816 [2024-07-10 15:28:21.997068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:42.816 [2024-07-10 15:28:21.997143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998288 ] 00:04:42.816 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.816 [2024-07-10 15:28:22.054681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.816 [2024-07-10 15:28:22.157917] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:42.816 [2024-07-10 15:28:22.158076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.750 15:28:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:43.750 15:28:22 -- common/autotest_common.sh@852 -- # return 0 00:04:43.750 15:28:22 -- event/cpu_locks.sh@49 -- # locks_exist 1998288 00:04:43.750 15:28:22 -- event/cpu_locks.sh@22 -- # lslocks -p 1998288 00:04:43.750 15:28:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:44.007 lslocks: write error 00:04:44.007 15:28:23 -- event/cpu_locks.sh@50 -- # killprocess 1998288 00:04:44.007 15:28:23 -- common/autotest_common.sh@926 -- # '[' -z 1998288 ']' 00:04:44.007 15:28:23 -- common/autotest_common.sh@930 -- # kill -0 1998288 00:04:44.007 15:28:23 -- common/autotest_common.sh@931 -- # uname 00:04:44.007 15:28:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:44.007 15:28:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1998288 00:04:44.007 15:28:23 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:44.007 15:28:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:44.007 15:28:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1998288' 00:04:44.007 killing process with pid 1998288 00:04:44.007 15:28:23 -- common/autotest_common.sh@945 -- # kill 1998288 00:04:44.007 15:28:23 -- common/autotest_common.sh@950 -- # wait 1998288 00:04:44.571 15:28:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1998288 00:04:44.571 15:28:23 -- common/autotest_common.sh@640 -- # local es=0 00:04:44.571 15:28:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1998288 00:04:44.571 15:28:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:44.571 15:28:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.571 15:28:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:44.571 15:28:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:44.571 15:28:23 -- common/autotest_common.sh@643 -- # waitforlisten 1998288 00:04:44.571 15:28:23 -- common/autotest_common.sh@819 -- # '[' -z 1998288 ']' 00:04:44.571 15:28:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.571 15:28:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.571 15:28:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
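The default_locks check above relies on spdk_tgt taking a lock, visible to lslocks, on a per-core file under /var/tmp (spdk_cpu_lock_000 for core 0 in this naming scheme); the "lslocks: write error" line is most likely benign, since grep -q closes the pipe as soon as it matches. A minimal stand-alone version of that check, with the pid and file naming taken from this run:

  pid=1998288                                   # spdk_tgt pid from this run
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds at least one CPU core lock"
  fi
  ls -l /var/tmp/spdk_cpu_lock_* 2>/dev/null    # one file per claimed core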
00:04:44.571 15:28:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.571 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:04:44.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1998288) - No such process 00:04:44.571 ERROR: process (pid: 1998288) is no longer running 00:04:44.571 15:28:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:44.571 15:28:23 -- common/autotest_common.sh@852 -- # return 1 00:04:44.571 15:28:23 -- common/autotest_common.sh@643 -- # es=1 00:04:44.571 15:28:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:44.571 15:28:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:44.571 15:28:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:44.571 15:28:23 -- event/cpu_locks.sh@54 -- # no_locks 00:04:44.571 15:28:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:44.571 15:28:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:44.572 15:28:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:44.572 00:04:44.572 real 0m1.731s 00:04:44.572 user 0m1.873s 00:04:44.572 sys 0m0.560s 00:04:44.572 15:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.572 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:04:44.572 ************************************ 00:04:44.572 END TEST default_locks 00:04:44.572 ************************************ 00:04:44.572 15:28:23 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:44.572 15:28:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:44.572 15:28:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:44.572 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:04:44.572 ************************************ 00:04:44.572 START TEST default_locks_via_rpc 00:04:44.572 ************************************ 00:04:44.572 15:28:23 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:04:44.572 15:28:23 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1998466 00:04:44.572 15:28:23 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:44.572 15:28:23 -- event/cpu_locks.sh@63 -- # waitforlisten 1998466 00:04:44.572 15:28:23 -- common/autotest_common.sh@819 -- # '[' -z 1998466 ']' 00:04:44.572 15:28:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.572 15:28:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:44.572 15:28:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.572 15:28:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:44.572 15:28:23 -- common/autotest_common.sh@10 -- # set +x 00:04:44.572 [2024-07-10 15:28:23.751396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
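The default_locks_via_rpc run starting here exercises the same lock file purely over JSON-RPC: the target is launched normally, the lock is dropped with framework_disable_cpumask_locks, verified gone, then re-taken with framework_enable_cpumask_locks. A rough sketch of that round-trip with SPDK's rpc.py (script path assumed from this workspace layout; the default /var/tmp/spdk.sock socket is used):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc framework_disable_cpumask_locks       # core-0 lock file is released
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null    # expected: no output
  $rpc framework_enable_cpumask_locks        # lock is claimed again
  lslocks | grep spdk_cpu_lock               # expected: one entry for core 0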
00:04:44.572 [2024-07-10 15:28:23.751514] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998466 ] 00:04:44.572 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.572 [2024-07-10 15:28:23.807957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.572 [2024-07-10 15:28:23.917114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:44.572 [2024-07-10 15:28:23.917261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:45.505 15:28:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:45.505 15:28:24 -- common/autotest_common.sh@852 -- # return 0 00:04:45.505 15:28:24 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:45.505 15:28:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.505 15:28:24 -- common/autotest_common.sh@10 -- # set +x 00:04:45.505 15:28:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.505 15:28:24 -- event/cpu_locks.sh@67 -- # no_locks 00:04:45.505 15:28:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:45.505 15:28:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:04:45.505 15:28:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:45.505 15:28:24 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:45.505 15:28:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:04:45.505 15:28:24 -- common/autotest_common.sh@10 -- # set +x 00:04:45.505 15:28:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:04:45.505 15:28:24 -- event/cpu_locks.sh@71 -- # locks_exist 1998466 00:04:45.505 15:28:24 -- event/cpu_locks.sh@22 -- # lslocks -p 1998466 00:04:45.505 15:28:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:45.762 15:28:24 -- event/cpu_locks.sh@73 -- # killprocess 1998466 00:04:45.762 15:28:24 -- common/autotest_common.sh@926 -- # '[' -z 1998466 ']' 00:04:45.762 15:28:24 -- common/autotest_common.sh@930 -- # kill -0 1998466 00:04:45.762 15:28:24 -- common/autotest_common.sh@931 -- # uname 00:04:45.762 15:28:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:45.762 15:28:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1998466 00:04:45.762 15:28:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:45.762 15:28:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:45.762 15:28:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1998466' 00:04:45.762 killing process with pid 1998466 00:04:45.762 15:28:24 -- common/autotest_common.sh@945 -- # kill 1998466 00:04:45.762 15:28:24 -- common/autotest_common.sh@950 -- # wait 1998466 00:04:46.328 00:04:46.328 real 0m1.739s 00:04:46.328 user 0m1.868s 00:04:46.328 sys 0m0.552s 00:04:46.328 15:28:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.328 15:28:25 -- common/autotest_common.sh@10 -- # set +x 00:04:46.328 ************************************ 00:04:46.328 END TEST default_locks_via_rpc 00:04:46.328 ************************************ 00:04:46.328 15:28:25 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:46.328 15:28:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.328 15:28:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.328 15:28:25 -- 
common/autotest_common.sh@10 -- # set +x 00:04:46.328 ************************************ 00:04:46.328 START TEST non_locking_app_on_locked_coremask 00:04:46.328 ************************************ 00:04:46.328 15:28:25 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:04:46.328 15:28:25 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1998760 00:04:46.328 15:28:25 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:46.328 15:28:25 -- event/cpu_locks.sh@81 -- # waitforlisten 1998760 /var/tmp/spdk.sock 00:04:46.328 15:28:25 -- common/autotest_common.sh@819 -- # '[' -z 1998760 ']' 00:04:46.328 15:28:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:46.328 15:28:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:46.328 15:28:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:46.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:46.328 15:28:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:46.328 15:28:25 -- common/autotest_common.sh@10 -- # set +x 00:04:46.328 [2024-07-10 15:28:25.516700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:46.328 [2024-07-10 15:28:25.516814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998760 ] 00:04:46.328 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.328 [2024-07-10 15:28:25.572930] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.328 [2024-07-10 15:28:25.678108] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:46.328 [2024-07-10 15:28:25.678265] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.263 15:28:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:47.263 15:28:26 -- common/autotest_common.sh@852 -- # return 0 00:04:47.263 15:28:26 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1998900 00:04:47.263 15:28:26 -- event/cpu_locks.sh@85 -- # waitforlisten 1998900 /var/tmp/spdk2.sock 00:04:47.263 15:28:26 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:47.263 15:28:26 -- common/autotest_common.sh@819 -- # '[' -z 1998900 ']' 00:04:47.263 15:28:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:47.263 15:28:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:47.263 15:28:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:47.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:47.263 15:28:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:47.263 15:28:26 -- common/autotest_common.sh@10 -- # set +x 00:04:47.263 [2024-07-10 15:28:26.537653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:04:47.263 [2024-07-10 15:28:26.537739] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998900 ] 00:04:47.263 EAL: No free 2048 kB hugepages reported on node 1 00:04:47.263 [2024-07-10 15:28:26.629890] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:47.263 [2024-07-10 15:28:26.629923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:47.521 [2024-07-10 15:28:26.871045] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:47.521 [2024-07-10 15:28:26.871217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.453 15:28:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:48.453 15:28:27 -- common/autotest_common.sh@852 -- # return 0 00:04:48.453 15:28:27 -- event/cpu_locks.sh@87 -- # locks_exist 1998760 00:04:48.453 15:28:27 -- event/cpu_locks.sh@22 -- # lslocks -p 1998760 00:04:48.453 15:28:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:48.711 lslocks: write error 00:04:48.711 15:28:27 -- event/cpu_locks.sh@89 -- # killprocess 1998760 00:04:48.711 15:28:27 -- common/autotest_common.sh@926 -- # '[' -z 1998760 ']' 00:04:48.711 15:28:27 -- common/autotest_common.sh@930 -- # kill -0 1998760 00:04:48.711 15:28:27 -- common/autotest_common.sh@931 -- # uname 00:04:48.711 15:28:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:48.711 15:28:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1998760 00:04:48.711 15:28:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:48.711 15:28:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:48.711 15:28:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1998760' 00:04:48.711 killing process with pid 1998760 00:04:48.711 15:28:27 -- common/autotest_common.sh@945 -- # kill 1998760 00:04:48.711 15:28:27 -- common/autotest_common.sh@950 -- # wait 1998760 00:04:49.644 15:28:28 -- event/cpu_locks.sh@90 -- # killprocess 1998900 00:04:49.644 15:28:28 -- common/autotest_common.sh@926 -- # '[' -z 1998900 ']' 00:04:49.644 15:28:28 -- common/autotest_common.sh@930 -- # kill -0 1998900 00:04:49.644 15:28:28 -- common/autotest_common.sh@931 -- # uname 00:04:49.644 15:28:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:49.644 15:28:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1998900 00:04:49.644 15:28:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:49.644 15:28:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:49.644 15:28:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1998900' 00:04:49.644 killing process with pid 1998900 00:04:49.644 15:28:28 -- common/autotest_common.sh@945 -- # kill 1998900 00:04:49.644 15:28:28 -- common/autotest_common.sh@950 -- # wait 1998900 00:04:50.210 00:04:50.210 real 0m3.871s 00:04:50.210 user 0m4.214s 00:04:50.210 sys 0m1.090s 00:04:50.210 15:28:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.210 15:28:29 -- common/autotest_common.sh@10 -- # set +x 00:04:50.210 ************************************ 00:04:50.210 END TEST non_locking_app_on_locked_coremask 00:04:50.210 ************************************ 00:04:50.210 15:28:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:04:50.210 15:28:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.210 15:28:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.210 15:28:29 -- common/autotest_common.sh@10 -- # set +x 00:04:50.210 ************************************ 00:04:50.210 START TEST locking_app_on_unlocked_coremask 00:04:50.210 ************************************ 00:04:50.210 15:28:29 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:04:50.210 15:28:29 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1999217 00:04:50.210 15:28:29 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:50.210 15:28:29 -- event/cpu_locks.sh@99 -- # waitforlisten 1999217 /var/tmp/spdk.sock 00:04:50.210 15:28:29 -- common/autotest_common.sh@819 -- # '[' -z 1999217 ']' 00:04:50.210 15:28:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.210 15:28:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:50.210 15:28:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.210 15:28:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:50.210 15:28:29 -- common/autotest_common.sh@10 -- # set +x 00:04:50.210 [2024-07-10 15:28:29.414946] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:50.210 [2024-07-10 15:28:29.415024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1999217 ] 00:04:50.210 EAL: No free 2048 kB hugepages reported on node 1 00:04:50.210 [2024-07-10 15:28:29.471766] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:50.210 [2024-07-10 15:28:29.471803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.210 [2024-07-10 15:28:29.580965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:50.210 [2024-07-10 15:28:29.581119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.143 15:28:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.143 15:28:30 -- common/autotest_common.sh@852 -- # return 0 00:04:51.143 15:28:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1999359 00:04:51.143 15:28:30 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:51.143 15:28:30 -- event/cpu_locks.sh@103 -- # waitforlisten 1999359 /var/tmp/spdk2.sock 00:04:51.143 15:28:30 -- common/autotest_common.sh@819 -- # '[' -z 1999359 ']' 00:04:51.143 15:28:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:51.143 15:28:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:51.143 15:28:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:51.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
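The locking_app_on_unlocked_coremask case above is the inverse scenario: because the first target gives up its core locks at startup, a second target may run on the very same core mask as long as it listens on its own RPC socket. Stripped of the test harness, the two launches reduce to roughly this (binary path taken from this workspace; both processes keep running until the test kills them):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  $spdk_tgt -m 0x1 --disable-cpumask-locks &      # holds no lock on core 0
  $spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &       # same core, own RPC socket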
00:04:51.143 15:28:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:51.143 15:28:30 -- common/autotest_common.sh@10 -- # set +x 00:04:51.143 [2024-07-10 15:28:30.387352] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:51.143 [2024-07-10 15:28:30.387438] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1999359 ] 00:04:51.143 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.143 [2024-07-10 15:28:30.478450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.400 [2024-07-10 15:28:30.711039] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:51.400 [2024-07-10 15:28:30.711218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.966 15:28:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:51.966 15:28:31 -- common/autotest_common.sh@852 -- # return 0 00:04:51.966 15:28:31 -- event/cpu_locks.sh@105 -- # locks_exist 1999359 00:04:51.966 15:28:31 -- event/cpu_locks.sh@22 -- # lslocks -p 1999359 00:04:51.966 15:28:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:52.531 lslocks: write error 00:04:52.531 15:28:31 -- event/cpu_locks.sh@107 -- # killprocess 1999217 00:04:52.531 15:28:31 -- common/autotest_common.sh@926 -- # '[' -z 1999217 ']' 00:04:52.531 15:28:31 -- common/autotest_common.sh@930 -- # kill -0 1999217 00:04:52.531 15:28:31 -- common/autotest_common.sh@931 -- # uname 00:04:52.531 15:28:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:52.531 15:28:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1999217 00:04:52.531 15:28:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:52.531 15:28:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:52.531 15:28:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1999217' 00:04:52.531 killing process with pid 1999217 00:04:52.531 15:28:31 -- common/autotest_common.sh@945 -- # kill 1999217 00:04:52.531 15:28:31 -- common/autotest_common.sh@950 -- # wait 1999217 00:04:53.464 15:28:32 -- event/cpu_locks.sh@108 -- # killprocess 1999359 00:04:53.465 15:28:32 -- common/autotest_common.sh@926 -- # '[' -z 1999359 ']' 00:04:53.465 15:28:32 -- common/autotest_common.sh@930 -- # kill -0 1999359 00:04:53.465 15:28:32 -- common/autotest_common.sh@931 -- # uname 00:04:53.465 15:28:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:53.465 15:28:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1999359 00:04:53.465 15:28:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:53.465 15:28:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:53.465 15:28:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1999359' 00:04:53.465 killing process with pid 1999359 00:04:53.465 15:28:32 -- common/autotest_common.sh@945 -- # kill 1999359 00:04:53.465 15:28:32 -- common/autotest_common.sh@950 -- # wait 1999359 00:04:54.030 00:04:54.030 real 0m3.753s 00:04:54.030 user 0m4.035s 00:04:54.030 sys 0m1.043s 00:04:54.030 15:28:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.030 15:28:33 -- common/autotest_common.sh@10 -- # set +x 00:04:54.030 ************************************ 00:04:54.030 END TEST locking_app_on_unlocked_coremask 
00:04:54.030 ************************************ 00:04:54.030 15:28:33 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:54.030 15:28:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.030 15:28:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.030 15:28:33 -- common/autotest_common.sh@10 -- # set +x 00:04:54.030 ************************************ 00:04:54.030 START TEST locking_app_on_locked_coremask 00:04:54.030 ************************************ 00:04:54.030 15:28:33 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:04:54.030 15:28:33 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1999736 00:04:54.030 15:28:33 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:54.030 15:28:33 -- event/cpu_locks.sh@116 -- # waitforlisten 1999736 /var/tmp/spdk.sock 00:04:54.030 15:28:33 -- common/autotest_common.sh@819 -- # '[' -z 1999736 ']' 00:04:54.030 15:28:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.030 15:28:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.030 15:28:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.030 15:28:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.030 15:28:33 -- common/autotest_common.sh@10 -- # set +x 00:04:54.030 [2024-07-10 15:28:33.194214] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:54.030 [2024-07-10 15:28:33.194303] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1999736 ] 00:04:54.030 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.030 [2024-07-10 15:28:33.260001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.030 [2024-07-10 15:28:33.368922] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:54.030 [2024-07-10 15:28:33.369081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.963 15:28:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:54.963 15:28:34 -- common/autotest_common.sh@852 -- # return 0 00:04:54.963 15:28:34 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1999805 00:04:54.963 15:28:34 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:54.963 15:28:34 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1999805 /var/tmp/spdk2.sock 00:04:54.963 15:28:34 -- common/autotest_common.sh@640 -- # local es=0 00:04:54.963 15:28:34 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 1999805 /var/tmp/spdk2.sock 00:04:54.963 15:28:34 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:54.963 15:28:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:54.963 15:28:34 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:54.963 15:28:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:54.963 15:28:34 -- common/autotest_common.sh@643 -- # waitforlisten 1999805 /var/tmp/spdk2.sock 00:04:54.963 15:28:34 -- common/autotest_common.sh@819 -- 
# '[' -z 1999805 ']' 00:04:54.963 15:28:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:54.963 15:28:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:54.963 15:28:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:54.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:54.963 15:28:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:54.963 15:28:34 -- common/autotest_common.sh@10 -- # set +x 00:04:54.963 [2024-07-10 15:28:34.220999] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:54.963 [2024-07-10 15:28:34.221086] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1999805 ] 00:04:54.963 EAL: No free 2048 kB hugepages reported on node 1 00:04:54.963 [2024-07-10 15:28:34.318561] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1999736 has claimed it. 00:04:54.963 [2024-07-10 15:28:34.318613] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:55.896 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (1999805) - No such process 00:04:55.896 ERROR: process (pid: 1999805) is no longer running 00:04:55.896 15:28:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:55.896 15:28:34 -- common/autotest_common.sh@852 -- # return 1 00:04:55.896 15:28:34 -- common/autotest_common.sh@643 -- # es=1 00:04:55.896 15:28:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:55.896 15:28:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:55.896 15:28:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:55.896 15:28:34 -- event/cpu_locks.sh@122 -- # locks_exist 1999736 00:04:55.896 15:28:34 -- event/cpu_locks.sh@22 -- # lslocks -p 1999736 00:04:55.896 15:28:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:56.154 lslocks: write error 00:04:56.154 15:28:35 -- event/cpu_locks.sh@124 -- # killprocess 1999736 00:04:56.154 15:28:35 -- common/autotest_common.sh@926 -- # '[' -z 1999736 ']' 00:04:56.154 15:28:35 -- common/autotest_common.sh@930 -- # kill -0 1999736 00:04:56.154 15:28:35 -- common/autotest_common.sh@931 -- # uname 00:04:56.154 15:28:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:56.154 15:28:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1999736 00:04:56.154 15:28:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:56.154 15:28:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:56.154 15:28:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1999736' 00:04:56.154 killing process with pid 1999736 00:04:56.154 15:28:35 -- common/autotest_common.sh@945 -- # kill 1999736 00:04:56.154 15:28:35 -- common/autotest_common.sh@950 -- # wait 1999736 00:04:56.719 00:04:56.719 real 0m2.723s 00:04:56.719 user 0m3.104s 00:04:56.719 sys 0m0.713s 00:04:56.719 15:28:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.719 15:28:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.719 ************************************ 00:04:56.719 END TEST locking_app_on_locked_coremask 00:04:56.719 ************************************ 00:04:56.719 
15:28:35 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:56.719 15:28:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:56.719 15:28:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:56.719 15:28:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.719 ************************************ 00:04:56.719 START TEST locking_overlapped_coremask 00:04:56.719 ************************************ 00:04:56.719 15:28:35 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:04:56.719 15:28:35 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2000106 00:04:56.719 15:28:35 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:56.719 15:28:35 -- event/cpu_locks.sh@133 -- # waitforlisten 2000106 /var/tmp/spdk.sock 00:04:56.719 15:28:35 -- common/autotest_common.sh@819 -- # '[' -z 2000106 ']' 00:04:56.719 15:28:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:56.719 15:28:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:56.719 15:28:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:56.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:56.719 15:28:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:56.719 15:28:35 -- common/autotest_common.sh@10 -- # set +x 00:04:56.719 [2024-07-10 15:28:35.945120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:56.719 [2024-07-10 15:28:35.945234] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000106 ] 00:04:56.719 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.719 [2024-07-10 15:28:36.008068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:56.977 [2024-07-10 15:28:36.121121] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:56.977 [2024-07-10 15:28:36.121365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.977 [2024-07-10 15:28:36.121420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:56.977 [2024-07-10 15:28:36.121423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.541 15:28:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:57.541 15:28:36 -- common/autotest_common.sh@852 -- # return 0 00:04:57.541 15:28:36 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2000247 00:04:57.541 15:28:36 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:57.541 15:28:36 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2000247 /var/tmp/spdk2.sock 00:04:57.541 15:28:36 -- common/autotest_common.sh@640 -- # local es=0 00:04:57.541 15:28:36 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2000247 /var/tmp/spdk2.sock 00:04:57.541 15:28:36 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:04:57.541 15:28:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:57.541 15:28:36 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:04:57.541 15:28:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:04:57.541 15:28:36 
-- common/autotest_common.sh@643 -- # waitforlisten 2000247 /var/tmp/spdk2.sock 00:04:57.541 15:28:36 -- common/autotest_common.sh@819 -- # '[' -z 2000247 ']' 00:04:57.541 15:28:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:57.541 15:28:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:57.541 15:28:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:57.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:57.541 15:28:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:57.541 15:28:36 -- common/autotest_common.sh@10 -- # set +x 00:04:57.541 [2024-07-10 15:28:36.903668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:57.541 [2024-07-10 15:28:36.903777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000247 ] 00:04:57.798 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.798 [2024-07-10 15:28:36.992117] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2000106 has claimed it. 00:04:57.798 [2024-07-10 15:28:36.992183] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:58.361 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2000247) - No such process 00:04:58.362 ERROR: process (pid: 2000247) is no longer running 00:04:58.362 15:28:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:58.362 15:28:37 -- common/autotest_common.sh@852 -- # return 1 00:04:58.362 15:28:37 -- common/autotest_common.sh@643 -- # es=1 00:04:58.362 15:28:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:04:58.362 15:28:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:04:58.362 15:28:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:04:58.362 15:28:37 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:58.362 15:28:37 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:58.362 15:28:37 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:58.362 15:28:37 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:58.362 15:28:37 -- event/cpu_locks.sh@141 -- # killprocess 2000106 00:04:58.362 15:28:37 -- common/autotest_common.sh@926 -- # '[' -z 2000106 ']' 00:04:58.362 15:28:37 -- common/autotest_common.sh@930 -- # kill -0 2000106 00:04:58.362 15:28:37 -- common/autotest_common.sh@931 -- # uname 00:04:58.362 15:28:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:04:58.362 15:28:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2000106 00:04:58.362 15:28:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:04:58.362 15:28:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:04:58.362 15:28:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2000106' 00:04:58.362 killing process with pid 2000106 00:04:58.362 15:28:37 -- common/autotest_common.sh@945 -- # kill 2000106 00:04:58.362 15:28:37 
-- common/autotest_common.sh@950 -- # wait 2000106 00:04:58.928 00:04:58.928 real 0m2.154s 00:04:58.928 user 0m5.989s 00:04:58.928 sys 0m0.502s 00:04:58.928 15:28:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:58.928 15:28:38 -- common/autotest_common.sh@10 -- # set +x 00:04:58.928 ************************************ 00:04:58.928 END TEST locking_overlapped_coremask 00:04:58.928 ************************************ 00:04:58.928 15:28:38 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:58.928 15:28:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:58.928 15:28:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:58.928 15:28:38 -- common/autotest_common.sh@10 -- # set +x 00:04:58.928 ************************************ 00:04:58.928 START TEST locking_overlapped_coremask_via_rpc 00:04:58.928 ************************************ 00:04:58.928 15:28:38 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:04:58.928 15:28:38 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2000411 00:04:58.928 15:28:38 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:58.928 15:28:38 -- event/cpu_locks.sh@149 -- # waitforlisten 2000411 /var/tmp/spdk.sock 00:04:58.928 15:28:38 -- common/autotest_common.sh@819 -- # '[' -z 2000411 ']' 00:04:58.928 15:28:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:58.928 15:28:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:58.928 15:28:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:58.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:58.928 15:28:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:58.928 15:28:38 -- common/autotest_common.sh@10 -- # set +x 00:04:58.928 [2024-07-10 15:28:38.121253] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:58.928 [2024-07-10 15:28:38.121332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000411 ] 00:04:58.928 EAL: No free 2048 kB hugepages reported on node 1 00:04:58.928 [2024-07-10 15:28:38.177187] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
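The "Cannot create lock on core 2" failure exercised above comes straight from the overlap of the two core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so they collide exactly on core 2. The intersection can be checked directly:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 only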
00:04:58.928 [2024-07-10 15:28:38.177221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:58.928 [2024-07-10 15:28:38.287980] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:04:58.928 [2024-07-10 15:28:38.288187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:58.928 [2024-07-10 15:28:38.288254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:04:58.928 [2024-07-10 15:28:38.288256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:59.860 15:28:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:04:59.860 15:28:39 -- common/autotest_common.sh@852 -- # return 0 00:04:59.860 15:28:39 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2000551 00:04:59.860 15:28:39 -- event/cpu_locks.sh@153 -- # waitforlisten 2000551 /var/tmp/spdk2.sock 00:04:59.860 15:28:39 -- common/autotest_common.sh@819 -- # '[' -z 2000551 ']' 00:04:59.860 15:28:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:59.860 15:28:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:04:59.860 15:28:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:59.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:59.860 15:28:39 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:04:59.860 15:28:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:04:59.860 15:28:39 -- common/autotest_common.sh@10 -- # set +x 00:04:59.860 [2024-07-10 15:28:39.109973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:04:59.860 [2024-07-10 15:28:39.110063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000551 ] 00:04:59.860 EAL: No free 2048 kB hugepages reported on node 1 00:04:59.860 [2024-07-10 15:28:39.199392] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:59.860 [2024-07-10 15:28:39.199448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.118 [2024-07-10 15:28:39.422965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:00.118 [2024-07-10 15:28:39.423177] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.118 [2024-07-10 15:28:39.426492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:05:00.118 [2024-07-10 15:28:39.426495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.683 15:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.683 15:28:40 -- common/autotest_common.sh@852 -- # return 0 00:05:00.683 15:28:40 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:00.683 15:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:00.683 15:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:00.683 15:28:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:00.683 15:28:40 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.683 15:28:40 -- common/autotest_common.sh@640 -- # local es=0 00:05:00.683 15:28:40 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.683 15:28:40 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:05:00.683 15:28:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:00.683 15:28:40 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:05:00.683 15:28:40 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:00.683 15:28:40 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:00.683 15:28:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:00.683 15:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:00.941 [2024-07-10 15:28:40.061547] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2000411 has claimed it. 00:05:00.941 request: 00:05:00.941 { 00:05:00.941 "method": "framework_enable_cpumask_locks", 00:05:00.941 "req_id": 1 00:05:00.941 } 00:05:00.941 Got JSON-RPC error response 00:05:00.941 response: 00:05:00.941 { 00:05:00.941 "code": -32603, 00:05:00.941 "message": "Failed to claim CPU core: 2" 00:05:00.941 } 00:05:00.941 15:28:40 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:05:00.941 15:28:40 -- common/autotest_common.sh@643 -- # es=1 00:05:00.941 15:28:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:00.941 15:28:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:00.941 15:28:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:00.941 15:28:40 -- event/cpu_locks.sh@158 -- # waitforlisten 2000411 /var/tmp/spdk.sock 00:05:00.941 15:28:40 -- common/autotest_common.sh@819 -- # '[' -z 2000411 ']' 00:05:00.941 15:28:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.941 15:28:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.941 15:28:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
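The JSON-RPC failure just above is the expected outcome for the via_rpc variant: the first target (mask 0x7) has already claimed core 2, so the second target (mask 0x1c, reached through /var/tmp/spdk2.sock) is refused when it asks to enable its own locks. Reproduced by hand it would look roughly like this (rpc.py path assumed; core 2's lock file is /var/tmp/spdk_cpu_lock_002 in this naming scheme):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
      || echo "claim refused as expected (JSON-RPC error -32603)"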
00:05:00.941 15:28:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.941 15:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:00.941 15:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:00.941 15:28:40 -- common/autotest_common.sh@852 -- # return 0 00:05:00.941 15:28:40 -- event/cpu_locks.sh@159 -- # waitforlisten 2000551 /var/tmp/spdk2.sock 00:05:00.941 15:28:40 -- common/autotest_common.sh@819 -- # '[' -z 2000551 ']' 00:05:00.941 15:28:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:00.941 15:28:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:00.941 15:28:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:00.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:00.941 15:28:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:00.941 15:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:01.198 15:28:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:01.198 15:28:40 -- common/autotest_common.sh@852 -- # return 0 00:05:01.198 15:28:40 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:01.198 15:28:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:01.198 15:28:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:01.198 15:28:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:01.198 00:05:01.198 real 0m2.461s 00:05:01.198 user 0m1.200s 00:05:01.198 sys 0m0.189s 00:05:01.198 15:28:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.198 15:28:40 -- common/autotest_common.sh@10 -- # set +x 00:05:01.198 ************************************ 00:05:01.198 END TEST locking_overlapped_coremask_via_rpc 00:05:01.198 ************************************ 00:05:01.198 15:28:40 -- event/cpu_locks.sh@174 -- # cleanup 00:05:01.198 15:28:40 -- event/cpu_locks.sh@15 -- # [[ -z 2000411 ]] 00:05:01.198 15:28:40 -- event/cpu_locks.sh@15 -- # killprocess 2000411 00:05:01.198 15:28:40 -- common/autotest_common.sh@926 -- # '[' -z 2000411 ']' 00:05:01.198 15:28:40 -- common/autotest_common.sh@930 -- # kill -0 2000411 00:05:01.198 15:28:40 -- common/autotest_common.sh@931 -- # uname 00:05:01.198 15:28:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:01.198 15:28:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2000411 00:05:01.456 15:28:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:01.456 15:28:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:01.456 15:28:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2000411' 00:05:01.456 killing process with pid 2000411 00:05:01.456 15:28:40 -- common/autotest_common.sh@945 -- # kill 2000411 00:05:01.456 15:28:40 -- common/autotest_common.sh@950 -- # wait 2000411 00:05:01.714 15:28:41 -- event/cpu_locks.sh@16 -- # [[ -z 2000551 ]] 00:05:01.714 15:28:41 -- event/cpu_locks.sh@16 -- # killprocess 2000551 00:05:01.714 15:28:41 -- common/autotest_common.sh@926 -- # '[' -z 2000551 ']' 00:05:01.714 15:28:41 -- common/autotest_common.sh@930 -- # kill -0 2000551 00:05:01.714 15:28:41 -- common/autotest_common.sh@931 -- # uname 
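check_remaining_locks above is easier to read without the xtrace escaping: it globs the lock files that actually exist, brace-expands the three paths the 0x7 target should hold after its successful framework_enable_cpumask_locks, and compares the two lists as strings. Unescaped, the assertion is simply:

  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})          # cores 0, 1 and 2
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo "only cores 0-2 are locked"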
00:05:01.714 15:28:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:01.714 15:28:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2000551 00:05:01.714 15:28:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:05:01.714 15:28:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:05:01.714 15:28:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2000551' 00:05:01.714 killing process with pid 2000551 00:05:01.714 15:28:41 -- common/autotest_common.sh@945 -- # kill 2000551 00:05:01.714 15:28:41 -- common/autotest_common.sh@950 -- # wait 2000551 00:05:02.278 15:28:41 -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.278 15:28:41 -- event/cpu_locks.sh@1 -- # cleanup 00:05:02.278 15:28:41 -- event/cpu_locks.sh@15 -- # [[ -z 2000411 ]] 00:05:02.278 15:28:41 -- event/cpu_locks.sh@15 -- # killprocess 2000411 00:05:02.278 15:28:41 -- common/autotest_common.sh@926 -- # '[' -z 2000411 ']' 00:05:02.278 15:28:41 -- common/autotest_common.sh@930 -- # kill -0 2000411 00:05:02.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2000411) - No such process 00:05:02.278 15:28:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2000411 is not found' 00:05:02.278 Process with pid 2000411 is not found 00:05:02.278 15:28:41 -- event/cpu_locks.sh@16 -- # [[ -z 2000551 ]] 00:05:02.278 15:28:41 -- event/cpu_locks.sh@16 -- # killprocess 2000551 00:05:02.278 15:28:41 -- common/autotest_common.sh@926 -- # '[' -z 2000551 ']' 00:05:02.278 15:28:41 -- common/autotest_common.sh@930 -- # kill -0 2000551 00:05:02.278 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2000551) - No such process 00:05:02.278 15:28:41 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2000551 is not found' 00:05:02.278 Process with pid 2000551 is not found 00:05:02.278 15:28:41 -- event/cpu_locks.sh@18 -- # rm -f 00:05:02.278 00:05:02.278 real 0m19.620s 00:05:02.278 user 0m34.524s 00:05:02.278 sys 0m5.467s 00:05:02.278 15:28:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.278 15:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:02.278 ************************************ 00:05:02.278 END TEST cpu_locks 00:05:02.278 ************************************ 00:05:02.278 00:05:02.278 real 0m46.013s 00:05:02.278 user 1m26.759s 00:05:02.278 sys 0m9.360s 00:05:02.278 15:28:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:02.278 15:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:02.278 ************************************ 00:05:02.278 END TEST event 00:05:02.278 ************************************ 00:05:02.278 15:28:41 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.278 15:28:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:02.278 15:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.278 15:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:02.278 ************************************ 00:05:02.278 START TEST thread 00:05:02.278 ************************************ 00:05:02.278 15:28:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:02.278 * Looking for test storage... 
00:05:02.278 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:02.278 15:28:41 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.278 15:28:41 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:02.278 15:28:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:02.278 15:28:41 -- common/autotest_common.sh@10 -- # set +x 00:05:02.278 ************************************ 00:05:02.278 START TEST thread_poller_perf 00:05:02.278 ************************************ 00:05:02.278 15:28:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:02.278 [2024-07-10 15:28:41.632760] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:02.278 [2024-07-10 15:28:41.632849] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000928 ] 00:05:02.536 EAL: No free 2048 kB hugepages reported on node 1 00:05:02.536 [2024-07-10 15:28:41.694305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.536 [2024-07-10 15:28:41.807101] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.536 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:03.908 ====================================== 00:05:03.908 busy:2717557817 (cyc) 00:05:03.908 total_run_count: 280000 00:05:03.908 tsc_hz: 2700000000 (cyc) 00:05:03.908 ====================================== 00:05:03.908 poller_cost: 9705 (cyc), 3594 (nsec) 00:05:03.908 00:05:03.908 real 0m1.323s 00:05:03.908 user 0m1.237s 00:05:03.908 sys 0m0.079s 00:05:03.908 15:28:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.908 15:28:42 -- common/autotest_common.sh@10 -- # set +x 00:05:03.908 ************************************ 00:05:03.908 END TEST thread_poller_perf 00:05:03.908 ************************************ 00:05:03.908 15:28:42 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.908 15:28:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:03.908 15:28:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:03.908 15:28:42 -- common/autotest_common.sh@10 -- # set +x 00:05:03.908 ************************************ 00:05:03.908 START TEST thread_poller_perf 00:05:03.908 ************************************ 00:05:03.908 15:28:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:03.908 [2024-07-10 15:28:42.980737] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
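The poller_cost figures printed above follow from the counters reported with them: cycles per poller invocation are the busy cycles divided by total_run_count, truncated, and the nanosecond figure divides that by the TSC frequency. For this first run (1000 pollers, 1 microsecond period) the arithmetic reproduces both reported values:

  busy=2717557817 runs=280000 tsc_hz=2700000000
  cyc=$(( busy / runs ))                       # 9705 cycles per poll
  nsec=$(( cyc * 1000000000 / tsc_hz ))        # 3594 ns at 2.7 GHz
  echo "poller_cost: $cyc (cyc), $nsec (nsec)"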
00:05:03.908 [2024-07-10 15:28:42.980813] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001082 ] 00:05:03.908 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.908 [2024-07-10 15:28:43.041852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.908 [2024-07-10 15:28:43.158395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.908 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:05.282 ====================================== 00:05:05.282 busy:2703611180 (cyc) 00:05:05.282 total_run_count: 3828000 00:05:05.282 tsc_hz: 2700000000 (cyc) 00:05:05.282 ====================================== 00:05:05.282 poller_cost: 706 (cyc), 261 (nsec) 00:05:05.282 00:05:05.282 real 0m1.317s 00:05:05.282 user 0m1.227s 00:05:05.282 sys 0m0.083s 00:05:05.282 15:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.282 15:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.282 ************************************ 00:05:05.282 END TEST thread_poller_perf 00:05:05.282 ************************************ 00:05:05.282 15:28:44 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:05.282 00:05:05.282 real 0m2.740s 00:05:05.282 user 0m2.514s 00:05:05.282 sys 0m0.226s 00:05:05.282 15:28:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.282 15:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.282 ************************************ 00:05:05.282 END TEST thread 00:05:05.282 ************************************ 00:05:05.282 15:28:44 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.282 15:28:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:05.282 15:28:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.282 15:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.282 ************************************ 00:05:05.282 START TEST accel 00:05:05.282 ************************************ 00:05:05.282 15:28:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:05:05.282 * Looking for test storage... 00:05:05.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:05:05.282 15:28:44 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:05:05.282 15:28:44 -- accel/accel.sh@74 -- # get_expected_opcs 00:05:05.282 15:28:44 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:05.282 15:28:44 -- accel/accel.sh@59 -- # spdk_tgt_pid=2001282 00:05:05.282 15:28:44 -- accel/accel.sh@60 -- # waitforlisten 2001282 00:05:05.282 15:28:44 -- common/autotest_common.sh@819 -- # '[' -z 2001282 ']' 00:05:05.282 15:28:44 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:05:05.282 15:28:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:05.282 15:28:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:05.282 15:28:44 -- accel/accel.sh@58 -- # build_accel_config 00:05:05.282 15:28:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:05.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
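The accel target above is started with "-c /dev/fd/63", which is the footprint of bash process substitution: build_accel_config assembles a JSON config in memory and spdk_tgt reads it as if it were a file. The pattern, stripped to a skeleton (the empty-subsystems config below is only a placeholder, not what accel.sh actually generates):

  spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  build_accel_config() { echo '{ "subsystems": [] }'; }   # placeholder config
  $spdk_tgt -c <(build_accel_config) &                    # bash exposes this as /dev/fd/NN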
00:05:05.282 15:28:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:05.282 15:28:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:05.282 15:28:44 -- common/autotest_common.sh@10 -- # set +x 00:05:05.282 15:28:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.282 15:28:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.282 15:28:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:05.282 15:28:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:05.282 15:28:44 -- accel/accel.sh@41 -- # local IFS=, 00:05:05.282 15:28:44 -- accel/accel.sh@42 -- # jq -r . 00:05:05.282 [2024-07-10 15:28:44.422373] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:05.282 [2024-07-10 15:28:44.422483] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001282 ] 00:05:05.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.282 [2024-07-10 15:28:44.483350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.282 [2024-07-10 15:28:44.593958] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:05.282 [2024-07-10 15:28:44.594119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.217 15:28:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:06.217 15:28:45 -- common/autotest_common.sh@852 -- # return 0 00:05:06.217 15:28:45 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:05:06.217 15:28:45 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:05:06.217 15:28:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:06.217 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:06.217 15:28:45 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:05:06.217 15:28:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # IFS== 00:05:06.217 15:28:45 -- accel/accel.sh@64 -- # read -r opc module 00:05:06.217 15:28:45 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:05:06.217 15:28:45 -- accel/accel.sh@67 -- # killprocess 2001282 00:05:06.217 15:28:45 -- common/autotest_common.sh@926 -- # '[' -z 2001282 ']' 00:05:06.217 15:28:45 -- common/autotest_common.sh@930 -- # kill -0 2001282 00:05:06.217 15:28:45 -- common/autotest_common.sh@931 -- # uname 00:05:06.217 15:28:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:06.217 15:28:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2001282 00:05:06.217 15:28:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:06.217 15:28:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:06.217 15:28:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2001282' 00:05:06.217 killing process with pid 2001282 00:05:06.217 15:28:45 -- common/autotest_common.sh@945 -- # kill 2001282 00:05:06.217 15:28:45 -- common/autotest_common.sh@950 -- # wait 2001282 00:05:06.828 15:28:45 -- accel/accel.sh@68 -- # trap - ERR 00:05:06.828 15:28:45 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:05:06.828 15:28:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:05:06.828 15:28:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.828 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:06.828 15:28:45 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:05:06.828 15:28:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:05:06.828 15:28:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:06.828 15:28:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.828 15:28:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:28:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:28:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:28:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.828 15:28:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.828 15:28:45 -- accel/accel.sh@42 -- # jq -r . 
00:05:06.828 15:28:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.828 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:06.828 15:28:45 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:05:06.828 15:28:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:06.828 15:28:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:06.828 15:28:45 -- common/autotest_common.sh@10 -- # set +x 00:05:06.828 ************************************ 00:05:06.828 START TEST accel_missing_filename 00:05:06.828 ************************************ 00:05:06.828 15:28:45 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:05:06.828 15:28:45 -- common/autotest_common.sh@640 -- # local es=0 00:05:06.828 15:28:45 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:05:06.828 15:28:45 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:06.828 15:28:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:06.828 15:28:45 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:06.828 15:28:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:06.828 15:28:45 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:05:06.828 15:28:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:05:06.828 15:28:45 -- accel/accel.sh@12 -- # build_accel_config 00:05:06.828 15:28:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:06.828 15:28:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:28:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:28:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:06.828 15:28:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:06.828 15:28:45 -- accel/accel.sh@41 -- # local IFS=, 00:05:06.828 15:28:45 -- accel/accel.sh@42 -- # jq -r . 00:05:06.828 [2024-07-10 15:28:45.941632] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:06.828 [2024-07-10 15:28:45.941743] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001583 ] 00:05:06.828 EAL: No free 2048 kB hugepages reported on node 1 00:05:06.828 [2024-07-10 15:28:46.000171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:06.828 [2024-07-10 15:28:46.118486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.828 [2024-07-10 15:28:46.180153] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.087 [2024-07-10 15:28:46.264500] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:07.087 A filename is required. 
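The "A filename is required." failure above is the expected outcome of running the compress workload without naming an input file; per the option list printed later in this log, compress/decompress take that file via -l, which is exactly what the accel_compress_verify test below supplies. A hypothetical corrected invocation, shown only for illustration (binary and input paths as used elsewhere in this workspace):

# Illustrative only: same compress run, but with the uncompressed input file
# passed via -l so accel_perf has something to compress.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w compress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib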
00:05:07.087 15:28:46 -- common/autotest_common.sh@643 -- # es=234 00:05:07.087 15:28:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.087 15:28:46 -- common/autotest_common.sh@652 -- # es=106 00:05:07.087 15:28:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:07.087 15:28:46 -- common/autotest_common.sh@660 -- # es=1 00:05:07.087 15:28:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.087 00:05:07.087 real 0m0.465s 00:05:07.087 user 0m0.355s 00:05:07.087 sys 0m0.140s 00:05:07.087 15:28:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.087 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.087 ************************************ 00:05:07.087 END TEST accel_missing_filename 00:05:07.087 ************************************ 00:05:07.087 15:28:46 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.087 15:28:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:07.087 15:28:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.087 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.087 ************************************ 00:05:07.087 START TEST accel_compress_verify 00:05:07.087 ************************************ 00:05:07.087 15:28:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.087 15:28:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:07.087 15:28:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.087 15:28:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:07.087 15:28:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.087 15:28:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:07.087 15:28:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.087 15:28:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.087 15:28:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:07.087 15:28:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.087 15:28:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.087 15:28:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.087 15:28:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.087 15:28:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.087 15:28:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.087 15:28:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.087 15:28:46 -- accel/accel.sh@42 -- # jq -r . 00:05:07.087 [2024-07-10 15:28:46.435759] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:07.087 [2024-07-10 15:28:46.435833] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001603 ] 00:05:07.087 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.346 [2024-07-10 15:28:46.498762] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.346 [2024-07-10 15:28:46.612807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.346 [2024-07-10 15:28:46.672948] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:07.605 [2024-07-10 15:28:46.751885] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:05:07.605 00:05:07.605 Compression does not support the verify option, aborting. 00:05:07.605 15:28:46 -- common/autotest_common.sh@643 -- # es=161 00:05:07.605 15:28:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.605 15:28:46 -- common/autotest_common.sh@652 -- # es=33 00:05:07.605 15:28:46 -- common/autotest_common.sh@653 -- # case "$es" in 00:05:07.605 15:28:46 -- common/autotest_common.sh@660 -- # es=1 00:05:07.605 15:28:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.605 00:05:07.605 real 0m0.460s 00:05:07.605 user 0m0.357s 00:05:07.605 sys 0m0.137s 00:05:07.605 15:28:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.605 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 ************************************ 00:05:07.605 END TEST accel_compress_verify 00:05:07.605 ************************************ 00:05:07.605 15:28:46 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:05:07.605 15:28:46 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:07.605 15:28:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.605 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 ************************************ 00:05:07.605 START TEST accel_wrong_workload 00:05:07.605 ************************************ 00:05:07.605 15:28:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:05:07.605 15:28:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:07.605 15:28:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:05:07.605 15:28:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:07.605 15:28:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.605 15:28:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:07.605 15:28:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.605 15:28:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:05:07.605 15:28:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:05:07.605 15:28:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.605 15:28:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.605 15:28:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.605 15:28:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.605 15:28:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.605 15:28:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.605 15:28:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.605 15:28:46 -- accel/accel.sh@42 -- # jq -r . 
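Both expected-failure tests above lean on the exit-status folding visible in their xtrace: a status above 128 is reduced by 128 (234 becomes 106, 161 becomes 33) and any remaining non-zero value collapses to 1, which the NOT wrapper then treats as the failure it was told to expect. A simplified sketch of that reduction, read off the trace rather than the helper's actual source:

# Simplified re-implementation of the status folding seen in the trace above.
reduce_es() {
    local es=$1
    (( es > 128 )) && es=$(( es - 128 ))   # fold signal-style statuses back into range
    (( es != 0 )) && es=1                  # collapse any remaining failure to 1
    echo "$es"
}
reduce_es 234   # -> 1 (via 106), as in the accel_missing_filename run
reduce_es 161   # -> 1 (via 33), as in the accel_compress_verify run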
00:05:07.605 Unsupported workload type: foobar 00:05:07.605 [2024-07-10 15:28:46.920377] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:05:07.605 accel_perf options: 00:05:07.605 [-h help message] 00:05:07.605 [-q queue depth per core] 00:05:07.605 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.605 [-T number of threads per core 00:05:07.605 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.605 [-t time in seconds] 00:05:07.605 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.605 [ dif_verify, , dif_generate, dif_generate_copy 00:05:07.605 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.605 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.605 [-S for crc32c workload, use this seed value (default 0) 00:05:07.605 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.605 [-f for fill workload, use this BYTE value (default 255) 00:05:07.605 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.605 [-y verify result if this switch is on] 00:05:07.605 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.605 Can be used to spread operations across a wider range of memory. 00:05:07.605 15:28:46 -- common/autotest_common.sh@643 -- # es=1 00:05:07.605 15:28:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.605 15:28:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:07.605 15:28:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.605 00:05:07.605 real 0m0.024s 00:05:07.605 user 0m0.014s 00:05:07.605 sys 0m0.011s 00:05:07.605 15:28:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.605 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 ************************************ 00:05:07.605 END TEST accel_wrong_workload 00:05:07.605 ************************************ 00:05:07.605 Error: writing output failed: Broken pipe 00:05:07.605 15:28:46 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.605 15:28:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:05:07.605 15:28:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.605 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 ************************************ 00:05:07.605 START TEST accel_negative_buffers 00:05:07.605 ************************************ 00:05:07.605 15:28:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:05:07.605 15:28:46 -- common/autotest_common.sh@640 -- # local es=0 00:05:07.605 15:28:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:05:07.605 15:28:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:05:07.605 15:28:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.605 15:28:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:05:07.605 15:28:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:05:07.605 15:28:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:05:07.605 15:28:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:05:07.605 15:28:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.605 15:28:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.605 15:28:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.605 15:28:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.605 15:28:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.605 15:28:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.605 15:28:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.605 15:28:46 -- accel/accel.sh@42 -- # jq -r . 00:05:07.605 -x option must be non-negative. 00:05:07.605 [2024-07-10 15:28:46.962303] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:05:07.605 accel_perf options: 00:05:07.605 [-h help message] 00:05:07.605 [-q queue depth per core] 00:05:07.605 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:05:07.605 [-T number of threads per core 00:05:07.605 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:05:07.605 [-t time in seconds] 00:05:07.605 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:05:07.605 [ dif_verify, , dif_generate, dif_generate_copy 00:05:07.605 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:05:07.605 [-l for compress/decompress workloads, name of uncompressed input file 00:05:07.605 [-S for crc32c workload, use this seed value (default 0) 00:05:07.605 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:05:07.605 [-f for fill workload, use this BYTE value (default 255) 00:05:07.605 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:05:07.605 [-y verify result if this switch is on] 00:05:07.605 [-a tasks to allocate per core (default: same value as -q)] 00:05:07.605 Can be used to spread operations across a wider range of memory. 
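The usage text above spells out the constraint the failing run violates: for the xor workload, -x sets the number of source buffers and its minimum is 2, so -x -1 is rejected before any work is submitted. A hypothetical invocation that satisfies it, built only from options shown in that list (illustration, not an additional run from this log):

# Illustrative only: xor with the minimum of two source buffers and verification on.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w xor -y -x 2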
00:05:07.605 15:28:46 -- common/autotest_common.sh@643 -- # es=1 00:05:07.605 15:28:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:05:07.605 15:28:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:05:07.605 15:28:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:05:07.605 00:05:07.605 real 0m0.019s 00:05:07.605 user 0m0.013s 00:05:07.605 sys 0m0.006s 00:05:07.605 15:28:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.605 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.605 ************************************ 00:05:07.605 END TEST accel_negative_buffers 00:05:07.605 ************************************ 00:05:07.864 Error: writing output failed: Broken pipe 00:05:07.864 15:28:46 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:05:07.864 15:28:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:07.864 15:28:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:07.864 15:28:46 -- common/autotest_common.sh@10 -- # set +x 00:05:07.864 ************************************ 00:05:07.864 START TEST accel_crc32c 00:05:07.864 ************************************ 00:05:07.864 15:28:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:05:07.864 15:28:46 -- accel/accel.sh@16 -- # local accel_opc 00:05:07.864 15:28:46 -- accel/accel.sh@17 -- # local accel_module 00:05:07.864 15:28:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:07.864 15:28:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:07.864 15:28:46 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.864 15:28:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:07.864 15:28:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.864 15:28:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.864 15:28:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:07.864 15:28:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:07.864 15:28:46 -- accel/accel.sh@41 -- # local IFS=, 00:05:07.864 15:28:46 -- accel/accel.sh@42 -- # jq -r . 00:05:07.864 [2024-07-10 15:28:47.007751] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:07.864 [2024-07-10 15:28:47.007823] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001784 ] 00:05:07.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:07.864 [2024-07-10 15:28:47.070085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.864 [2024-07-10 15:28:47.185410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.238 15:28:48 -- accel/accel.sh@18 -- # out=' 00:05:09.238 SPDK Configuration: 00:05:09.238 Core mask: 0x1 00:05:09.238 00:05:09.238 Accel Perf Configuration: 00:05:09.238 Workload Type: crc32c 00:05:09.238 CRC-32C seed: 32 00:05:09.238 Transfer size: 4096 bytes 00:05:09.238 Vector count 1 00:05:09.238 Module: software 00:05:09.238 Queue depth: 32 00:05:09.238 Allocate depth: 32 00:05:09.238 # threads/core: 1 00:05:09.238 Run time: 1 seconds 00:05:09.238 Verify: Yes 00:05:09.238 00:05:09.238 Running for 1 seconds... 
00:05:09.238 00:05:09.238 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:09.238 ------------------------------------------------------------------------------------ 00:05:09.238 0,0 404768/s 1581 MiB/s 0 0 00:05:09.238 ==================================================================================== 00:05:09.238 Total 404768/s 1581 MiB/s 0 0' 00:05:09.238 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.238 15:28:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:05:09.238 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.238 15:28:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:05:09.238 15:28:48 -- accel/accel.sh@12 -- # build_accel_config 00:05:09.238 15:28:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:09.239 15:28:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:09.239 15:28:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:09.239 15:28:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:09.239 15:28:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:09.239 15:28:48 -- accel/accel.sh@41 -- # local IFS=, 00:05:09.239 15:28:48 -- accel/accel.sh@42 -- # jq -r . 00:05:09.239 [2024-07-10 15:28:48.473577] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:09.239 [2024-07-10 15:28:48.473655] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001931 ] 00:05:09.239 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.239 [2024-07-10 15:28:48.533954] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.497 [2024-07-10 15:28:48.651303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val= 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val= 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=0x1 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val= 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val= 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=crc32c 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=32 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 
15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val= 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=software 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@23 -- # accel_module=software 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=32 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=32 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=1 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val=Yes 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val= 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:09.497 15:28:48 -- accel/accel.sh@21 -- # val= 00:05:09.497 15:28:48 -- accel/accel.sh@22 -- # case "$var" in 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # IFS=: 00:05:09.497 15:28:48 -- accel/accel.sh@20 -- # read -r var val 00:05:10.870 15:28:49 -- accel/accel.sh@21 -- # val= 00:05:10.870 15:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # IFS=: 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # read -r var val 00:05:10.870 15:28:49 -- accel/accel.sh@21 -- # val= 00:05:10.870 15:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # IFS=: 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # read -r var val 00:05:10.870 15:28:49 -- accel/accel.sh@21 -- # val= 00:05:10.870 15:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # IFS=: 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # read -r var val 00:05:10.870 15:28:49 -- accel/accel.sh@21 -- # val= 00:05:10.870 15:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # IFS=: 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # read -r var val 00:05:10.870 15:28:49 -- accel/accel.sh@21 -- # val= 00:05:10.870 15:28:49 -- accel/accel.sh@22 -- # case "$var" in 
00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # IFS=: 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # read -r var val 00:05:10.870 15:28:49 -- accel/accel.sh@21 -- # val= 00:05:10.870 15:28:49 -- accel/accel.sh@22 -- # case "$var" in 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # IFS=: 00:05:10.870 15:28:49 -- accel/accel.sh@20 -- # read -r var val 00:05:10.870 15:28:49 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:10.870 15:28:49 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:10.870 15:28:49 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:10.870 00:05:10.870 real 0m2.943s 00:05:10.870 user 0m2.649s 00:05:10.870 sys 0m0.288s 00:05:10.870 15:28:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.870 15:28:49 -- common/autotest_common.sh@10 -- # set +x 00:05:10.870 ************************************ 00:05:10.870 END TEST accel_crc32c 00:05:10.870 ************************************ 00:05:10.870 15:28:49 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:05:10.870 15:28:49 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:10.870 15:28:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.870 15:28:49 -- common/autotest_common.sh@10 -- # set +x 00:05:10.870 ************************************ 00:05:10.870 START TEST accel_crc32c_C2 00:05:10.870 ************************************ 00:05:10.870 15:28:49 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:05:10.870 15:28:49 -- accel/accel.sh@16 -- # local accel_opc 00:05:10.870 15:28:49 -- accel/accel.sh@17 -- # local accel_module 00:05:10.870 15:28:49 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:10.870 15:28:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:10.870 15:28:49 -- accel/accel.sh@12 -- # build_accel_config 00:05:10.870 15:28:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:10.870 15:28:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:10.870 15:28:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:10.870 15:28:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:10.870 15:28:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:10.870 15:28:49 -- accel/accel.sh@41 -- # local IFS=, 00:05:10.870 15:28:49 -- accel/accel.sh@42 -- # jq -r . 00:05:10.870 [2024-07-10 15:28:49.972820] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
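The bandwidth column in the crc32c report above follows directly from the transfer rate and the 4096-byte transfer size. A quick check of the 404768/s row, using figures copied straight from the table:

# 404768 transfers/s * 4096 bytes per transfer, expressed in MiB/s (2^20 bytes).
awk 'BEGIN { printf "%d MiB/s\n", 404768 * 4096 / (1024 * 1024) }'   # -> 1581 MiB/s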
00:05:10.870 [2024-07-10 15:28:49.972886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002090 ] 00:05:10.870 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.870 [2024-07-10 15:28:50.037869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.870 [2024-07-10 15:28:50.155945] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.242 15:28:51 -- accel/accel.sh@18 -- # out=' 00:05:12.242 SPDK Configuration: 00:05:12.242 Core mask: 0x1 00:05:12.242 00:05:12.242 Accel Perf Configuration: 00:05:12.242 Workload Type: crc32c 00:05:12.242 CRC-32C seed: 0 00:05:12.242 Transfer size: 4096 bytes 00:05:12.242 Vector count 2 00:05:12.242 Module: software 00:05:12.242 Queue depth: 32 00:05:12.242 Allocate depth: 32 00:05:12.242 # threads/core: 1 00:05:12.242 Run time: 1 seconds 00:05:12.242 Verify: Yes 00:05:12.242 00:05:12.242 Running for 1 seconds... 00:05:12.242 00:05:12.242 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:12.242 ------------------------------------------------------------------------------------ 00:05:12.242 0,0 311264/s 2431 MiB/s 0 0 00:05:12.242 ==================================================================================== 00:05:12.242 Total 311264/s 1215 MiB/s 0 0' 00:05:12.242 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.242 15:28:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:05:12.242 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.242 15:28:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:05:12.242 15:28:51 -- accel/accel.sh@12 -- # build_accel_config 00:05:12.242 15:28:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:12.242 15:28:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:12.242 15:28:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:12.242 15:28:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:12.242 15:28:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:12.242 15:28:51 -- accel/accel.sh@41 -- # local IFS=, 00:05:12.242 15:28:51 -- accel/accel.sh@42 -- # jq -r . 00:05:12.242 [2024-07-10 15:28:51.441103] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
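In the -C 2 report above the per-core row and the Total row disagree (2431 vs 1215 MiB/s) even though both show the same 311264 transfers/s. The split is consistent with the per-core row counting both 4096-byte source vectors per transfer while the Total row counts only one; that reading of the tool's accounting is inferred from the numbers, not stated anywhere in this log. The arithmetic behind both figures:

# 311264 transfers/s at 4096 bytes per vector; -C 2 means two source vectors.
awk 'BEGIN {
    t = 311264; b = 4096
    printf "counting both vectors: %d MiB/s\n", t * b * 2 / (1024 * 1024)   # -> 2431
    printf "counting one vector:   %d MiB/s\n", t * b     / (1024 * 1024)   # -> 1215
}'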
00:05:12.242 [2024-07-10 15:28:51.441182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002349 ] 00:05:12.242 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.242 [2024-07-10 15:28:51.503629] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.500 [2024-07-10 15:28:51.622674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val= 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val= 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val=0x1 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val= 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val= 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val=crc32c 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val=0 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val= 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val=software 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@23 -- # accel_module=software 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val=32 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val=32 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- 
accel/accel.sh@21 -- # val=1 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val=Yes 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val= 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:12.500 15:28:51 -- accel/accel.sh@21 -- # val= 00:05:12.500 15:28:51 -- accel/accel.sh@22 -- # case "$var" in 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # IFS=: 00:05:12.500 15:28:51 -- accel/accel.sh@20 -- # read -r var val 00:05:13.878 15:28:52 -- accel/accel.sh@21 -- # val= 00:05:13.878 15:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.878 15:28:52 -- accel/accel.sh@20 -- # IFS=: 00:05:13.878 15:28:52 -- accel/accel.sh@20 -- # read -r var val 00:05:13.878 15:28:52 -- accel/accel.sh@21 -- # val= 00:05:13.879 15:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # IFS=: 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # read -r var val 00:05:13.879 15:28:52 -- accel/accel.sh@21 -- # val= 00:05:13.879 15:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # IFS=: 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # read -r var val 00:05:13.879 15:28:52 -- accel/accel.sh@21 -- # val= 00:05:13.879 15:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # IFS=: 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # read -r var val 00:05:13.879 15:28:52 -- accel/accel.sh@21 -- # val= 00:05:13.879 15:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # IFS=: 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # read -r var val 00:05:13.879 15:28:52 -- accel/accel.sh@21 -- # val= 00:05:13.879 15:28:52 -- accel/accel.sh@22 -- # case "$var" in 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # IFS=: 00:05:13.879 15:28:52 -- accel/accel.sh@20 -- # read -r var val 00:05:13.879 15:28:52 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:13.879 15:28:52 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:05:13.879 15:28:52 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:13.879 00:05:13.879 real 0m2.931s 00:05:13.879 user 0m2.639s 00:05:13.879 sys 0m0.284s 00:05:13.879 15:28:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.879 15:28:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.879 ************************************ 00:05:13.879 END TEST accel_crc32c_C2 00:05:13.879 ************************************ 00:05:13.879 15:28:52 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:13.879 15:28:52 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:13.879 15:28:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.879 15:28:52 -- common/autotest_common.sh@10 -- # set +x 00:05:13.879 ************************************ 00:05:13.879 START TEST accel_copy 
00:05:13.879 ************************************ 00:05:13.879 15:28:52 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:05:13.879 15:28:52 -- accel/accel.sh@16 -- # local accel_opc 00:05:13.879 15:28:52 -- accel/accel.sh@17 -- # local accel_module 00:05:13.879 15:28:52 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:05:13.879 15:28:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:13.879 15:28:52 -- accel/accel.sh@12 -- # build_accel_config 00:05:13.879 15:28:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:13.879 15:28:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:13.879 15:28:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:13.879 15:28:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:13.879 15:28:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:13.879 15:28:52 -- accel/accel.sh@41 -- # local IFS=, 00:05:13.879 15:28:52 -- accel/accel.sh@42 -- # jq -r . 00:05:13.879 [2024-07-10 15:28:52.931504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:13.879 [2024-07-10 15:28:52.931578] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002518 ] 00:05:13.879 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.879 [2024-07-10 15:28:52.994176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.879 [2024-07-10 15:28:53.111017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.252 15:28:54 -- accel/accel.sh@18 -- # out=' 00:05:15.252 SPDK Configuration: 00:05:15.252 Core mask: 0x1 00:05:15.252 00:05:15.252 Accel Perf Configuration: 00:05:15.252 Workload Type: copy 00:05:15.252 Transfer size: 4096 bytes 00:05:15.252 Vector count 1 00:05:15.252 Module: software 00:05:15.252 Queue depth: 32 00:05:15.252 Allocate depth: 32 00:05:15.252 # threads/core: 1 00:05:15.252 Run time: 1 seconds 00:05:15.252 Verify: Yes 00:05:15.252 00:05:15.252 Running for 1 seconds... 00:05:15.252 00:05:15.252 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:15.252 ------------------------------------------------------------------------------------ 00:05:15.252 0,0 278496/s 1087 MiB/s 0 0 00:05:15.252 ==================================================================================== 00:05:15.252 Total 278496/s 1087 MiB/s 0 0' 00:05:15.252 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.252 15:28:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:15.252 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.253 15:28:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:05:15.253 15:28:54 -- accel/accel.sh@12 -- # build_accel_config 00:05:15.253 15:28:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:15.253 15:28:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:15.253 15:28:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:15.253 15:28:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:15.253 15:28:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:15.253 15:28:54 -- accel/accel.sh@41 -- # local IFS=, 00:05:15.253 15:28:54 -- accel/accel.sh@42 -- # jq -r . 00:05:15.253 [2024-07-10 15:28:54.404874] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:15.253 [2024-07-10 15:28:54.404951] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002658 ] 00:05:15.253 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.253 [2024-07-10 15:28:54.465714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.253 [2024-07-10 15:28:54.583056] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val= 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val= 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val=0x1 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val= 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val= 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val=copy 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@24 -- # accel_opc=copy 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val= 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val=software 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@23 -- # accel_module=software 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val=32 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val=32 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val=1 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val=Yes 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val= 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:15.511 15:28:54 -- accel/accel.sh@21 -- # val= 00:05:15.511 15:28:54 -- accel/accel.sh@22 -- # case "$var" in 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # IFS=: 00:05:15.511 15:28:54 -- accel/accel.sh@20 -- # read -r var val 00:05:16.885 15:28:55 -- accel/accel.sh@21 -- # val= 00:05:16.885 15:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # IFS=: 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # read -r var val 00:05:16.885 15:28:55 -- accel/accel.sh@21 -- # val= 00:05:16.885 15:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # IFS=: 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # read -r var val 00:05:16.885 15:28:55 -- accel/accel.sh@21 -- # val= 00:05:16.885 15:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # IFS=: 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # read -r var val 00:05:16.885 15:28:55 -- accel/accel.sh@21 -- # val= 00:05:16.885 15:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # IFS=: 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # read -r var val 00:05:16.885 15:28:55 -- accel/accel.sh@21 -- # val= 00:05:16.885 15:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.885 15:28:55 -- accel/accel.sh@20 -- # IFS=: 00:05:16.886 15:28:55 -- accel/accel.sh@20 -- # read -r var val 00:05:16.886 15:28:55 -- accel/accel.sh@21 -- # val= 00:05:16.886 15:28:55 -- accel/accel.sh@22 -- # case "$var" in 00:05:16.886 15:28:55 -- accel/accel.sh@20 -- # IFS=: 00:05:16.886 15:28:55 -- accel/accel.sh@20 -- # read -r var val 00:05:16.886 15:28:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:16.886 15:28:55 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:05:16.886 15:28:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:16.886 00:05:16.886 real 0m2.940s 00:05:16.886 user 0m2.647s 00:05:16.886 sys 0m0.285s 00:05:16.886 15:28:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.886 15:28:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.886 ************************************ 00:05:16.886 END TEST accel_copy 00:05:16.886 ************************************ 00:05:16.886 15:28:55 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.886 15:28:55 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:16.886 15:28:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.886 15:28:55 -- common/autotest_common.sh@10 -- # set +x 00:05:16.886 ************************************ 00:05:16.886 START TEST accel_fill 00:05:16.886 ************************************ 00:05:16.886 15:28:55 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.886 15:28:55 -- accel/accel.sh@16 -- # local accel_opc 
00:05:16.886 15:28:55 -- accel/accel.sh@17 -- # local accel_module 00:05:16.886 15:28:55 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.886 15:28:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:16.886 15:28:55 -- accel/accel.sh@12 -- # build_accel_config 00:05:16.886 15:28:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:16.886 15:28:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:16.886 15:28:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:16.886 15:28:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:16.886 15:28:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:16.886 15:28:55 -- accel/accel.sh@41 -- # local IFS=, 00:05:16.886 15:28:55 -- accel/accel.sh@42 -- # jq -r . 00:05:16.886 [2024-07-10 15:28:55.898163] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:16.886 [2024-07-10 15:28:55.898253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2002936 ] 00:05:16.886 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.886 [2024-07-10 15:28:55.961705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.886 [2024-07-10 15:28:56.078058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.258 15:28:57 -- accel/accel.sh@18 -- # out=' 00:05:18.258 SPDK Configuration: 00:05:18.258 Core mask: 0x1 00:05:18.258 00:05:18.258 Accel Perf Configuration: 00:05:18.258 Workload Type: fill 00:05:18.258 Fill pattern: 0x80 00:05:18.258 Transfer size: 4096 bytes 00:05:18.258 Vector count 1 00:05:18.258 Module: software 00:05:18.258 Queue depth: 64 00:05:18.258 Allocate depth: 64 00:05:18.258 # threads/core: 1 00:05:18.258 Run time: 1 seconds 00:05:18.258 Verify: Yes 00:05:18.258 00:05:18.258 Running for 1 seconds... 00:05:18.258 00:05:18.258 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:18.258 ------------------------------------------------------------------------------------ 00:05:18.258 0,0 402752/s 1573 MiB/s 0 0 00:05:18.258 ==================================================================================== 00:05:18.258 Total 402752/s 1573 MiB/s 0 0' 00:05:18.258 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.258 15:28:57 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.258 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.258 15:28:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:18.258 15:28:57 -- accel/accel.sh@12 -- # build_accel_config 00:05:18.258 15:28:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:18.258 15:28:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:18.258 15:28:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:18.258 15:28:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:18.258 15:28:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:18.258 15:28:57 -- accel/accel.sh@41 -- # local IFS=, 00:05:18.258 15:28:57 -- accel/accel.sh@42 -- # jq -r . 00:05:18.258 [2024-07-10 15:28:57.365144] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
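The fill report above comes from the run launched with -f 128 -q 64 -a 64 -y: per the usage text earlier, -f supplies the fill byte (128 decimal is the 0x80 pattern shown in the configuration) and -q/-a set the queue depth and allocate depth of 64. A one-line check of the byte-to-pattern mapping:

printf 'fill byte 128 -> pattern 0x%02X\n' 128   # -> pattern 0x80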
00:05:18.258 [2024-07-10 15:28:57.365220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003084 ] 00:05:18.258 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.258 [2024-07-10 15:28:57.427100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.258 [2024-07-10 15:28:57.543914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.258 15:28:57 -- accel/accel.sh@21 -- # val= 00:05:18.258 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.258 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val= 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val=0x1 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val= 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val= 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val=fill 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@24 -- # accel_opc=fill 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val=0x80 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val= 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val=software 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@23 -- # accel_module=software 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val=64 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val=64 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- 
accel/accel.sh@21 -- # val=1 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val=Yes 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val= 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:18.259 15:28:57 -- accel/accel.sh@21 -- # val= 00:05:18.259 15:28:57 -- accel/accel.sh@22 -- # case "$var" in 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # IFS=: 00:05:18.259 15:28:57 -- accel/accel.sh@20 -- # read -r var val 00:05:19.633 15:28:58 -- accel/accel.sh@21 -- # val= 00:05:19.633 15:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # IFS=: 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # read -r var val 00:05:19.633 15:28:58 -- accel/accel.sh@21 -- # val= 00:05:19.633 15:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # IFS=: 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # read -r var val 00:05:19.633 15:28:58 -- accel/accel.sh@21 -- # val= 00:05:19.633 15:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # IFS=: 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # read -r var val 00:05:19.633 15:28:58 -- accel/accel.sh@21 -- # val= 00:05:19.633 15:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # IFS=: 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # read -r var val 00:05:19.633 15:28:58 -- accel/accel.sh@21 -- # val= 00:05:19.633 15:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # IFS=: 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # read -r var val 00:05:19.633 15:28:58 -- accel/accel.sh@21 -- # val= 00:05:19.633 15:28:58 -- accel/accel.sh@22 -- # case "$var" in 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # IFS=: 00:05:19.633 15:28:58 -- accel/accel.sh@20 -- # read -r var val 00:05:19.633 15:28:58 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:19.633 15:28:58 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:05:19.633 15:28:58 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:19.633 00:05:19.633 real 0m2.950s 00:05:19.633 user 0m2.653s 00:05:19.633 sys 0m0.289s 00:05:19.633 15:28:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.633 15:28:58 -- common/autotest_common.sh@10 -- # set +x 00:05:19.633 ************************************ 00:05:19.633 END TEST accel_fill 00:05:19.633 ************************************ 00:05:19.633 15:28:58 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:19.633 15:28:58 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:19.633 15:28:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.633 15:28:58 -- common/autotest_common.sh@10 -- # set +x 00:05:19.633 ************************************ 00:05:19.633 START TEST 
accel_copy_crc32c 00:05:19.633 ************************************ 00:05:19.633 15:28:58 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:05:19.633 15:28:58 -- accel/accel.sh@16 -- # local accel_opc 00:05:19.633 15:28:58 -- accel/accel.sh@17 -- # local accel_module 00:05:19.633 15:28:58 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:19.633 15:28:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:19.633 15:28:58 -- accel/accel.sh@12 -- # build_accel_config 00:05:19.633 15:28:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:19.633 15:28:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:19.633 15:28:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:19.633 15:28:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:19.633 15:28:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:19.633 15:28:58 -- accel/accel.sh@41 -- # local IFS=, 00:05:19.633 15:28:58 -- accel/accel.sh@42 -- # jq -r . 00:05:19.633 [2024-07-10 15:28:58.875918] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:19.633 [2024-07-10 15:28:58.875997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003245 ] 00:05:19.633 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.633 [2024-07-10 15:28:58.938211] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.891 [2024-07-10 15:28:59.060729] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.264 15:29:00 -- accel/accel.sh@18 -- # out=' 00:05:21.264 SPDK Configuration: 00:05:21.264 Core mask: 0x1 00:05:21.264 00:05:21.265 Accel Perf Configuration: 00:05:21.265 Workload Type: copy_crc32c 00:05:21.265 CRC-32C seed: 0 00:05:21.265 Vector size: 4096 bytes 00:05:21.265 Transfer size: 4096 bytes 00:05:21.265 Vector count 1 00:05:21.265 Module: software 00:05:21.265 Queue depth: 32 00:05:21.265 Allocate depth: 32 00:05:21.265 # threads/core: 1 00:05:21.265 Run time: 1 seconds 00:05:21.265 Verify: Yes 00:05:21.265 00:05:21.265 Running for 1 seconds... 00:05:21.265 00:05:21.265 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:21.265 ------------------------------------------------------------------------------------ 00:05:21.265 0,0 216960/s 847 MiB/s 0 0 00:05:21.265 ==================================================================================== 00:05:21.265 Total 216960/s 847 MiB/s 0 0' 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:05:21.265 15:29:00 -- accel/accel.sh@12 -- # build_accel_config 00:05:21.265 15:29:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:21.265 15:29:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:21.265 15:29:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:21.265 15:29:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:21.265 15:29:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:21.265 15:29:00 -- accel/accel.sh@41 -- # local IFS=, 00:05:21.265 15:29:00 -- accel/accel.sh@42 -- # jq -r . 
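Editor's note on the result tables: the bandwidth column is consistent with transfers per second multiplied by the transfer size and then truncated, e.g. the copy_crc32c row above gives 216960 x 4096 B, about 847 MiB/s, and the fill table earlier reproduces the same way (402752 x 4096 B is about 1573 MiB/s). A quick check with the numbers taken from those rows; nothing here is SPDK-specific, plain awk arithmetic only.
# Recompute the MiB/s column from transfers/s and the 4096-byte transfer size (%d truncates like the displayed values):
awk 'BEGIN { printf "%d MiB/s\n", 216960 * 4096 / 1048576 }'   # copy_crc32c row above -> 847
awk 'BEGIN { printf "%d MiB/s\n", 402752 * 4096 / 1048576 }'   # fill row further up   -> 1573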
00:05:21.265 [2024-07-10 15:29:00.360136] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:21.265 [2024-07-10 15:29:00.360229] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003464 ] 00:05:21.265 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.265 [2024-07-10 15:29:00.422814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.265 [2024-07-10 15:29:00.543502] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val= 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val= 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=0x1 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val= 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val= 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=0 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val= 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=software 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@23 -- # accel_module=software 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=32 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 
00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=32 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=1 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val=Yes 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val= 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:21.265 15:29:00 -- accel/accel.sh@21 -- # val= 00:05:21.265 15:29:00 -- accel/accel.sh@22 -- # case "$var" in 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # IFS=: 00:05:21.265 15:29:00 -- accel/accel.sh@20 -- # read -r var val 00:05:22.640 15:29:01 -- accel/accel.sh@21 -- # val= 00:05:22.640 15:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # IFS=: 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # read -r var val 00:05:22.640 15:29:01 -- accel/accel.sh@21 -- # val= 00:05:22.640 15:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # IFS=: 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # read -r var val 00:05:22.640 15:29:01 -- accel/accel.sh@21 -- # val= 00:05:22.640 15:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # IFS=: 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # read -r var val 00:05:22.640 15:29:01 -- accel/accel.sh@21 -- # val= 00:05:22.640 15:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # IFS=: 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # read -r var val 00:05:22.640 15:29:01 -- accel/accel.sh@21 -- # val= 00:05:22.640 15:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # IFS=: 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # read -r var val 00:05:22.640 15:29:01 -- accel/accel.sh@21 -- # val= 00:05:22.640 15:29:01 -- accel/accel.sh@22 -- # case "$var" in 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # IFS=: 00:05:22.640 15:29:01 -- accel/accel.sh@20 -- # read -r var val 00:05:22.640 15:29:01 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:22.640 15:29:01 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:22.640 15:29:01 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:22.640 00:05:22.640 real 0m2.971s 00:05:22.640 user 0m2.672s 00:05:22.640 sys 0m0.291s 00:05:22.640 15:29:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.640 15:29:01 -- common/autotest_common.sh@10 -- # set +x 00:05:22.640 ************************************ 00:05:22.640 END TEST accel_copy_crc32c 00:05:22.640 ************************************ 00:05:22.640 
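Editor's note, now that accel_copy_crc32c has finished: every accel_test case in this section runs accel_perf twice, first standalone and then with -c /dev/fd/62, i.e. with a JSON accel configuration handed over a file descriptor assembled by build_accel_config (in this log accel_json_cfg stays empty, so no extra accel modules are configured and the software module is used throughout). The sketch below shows the same file-descriptor mechanism with bash process substitution; the minimal JSON body is a placeholder assumption, not the exact text the harness generated.
# Sketch of the -c /dev/fd/NN style used above: pass a JSON config over a file descriptor.
# '{"subsystems": []}' is a placeholder empty config; the harness builds its own JSON.
accel_json='{"subsystems": []}'
sudo ./build/examples/accel_perf -c <(printf '%s' "$accel_json") -t 1 -w copy_crc32c -y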
15:29:01 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:22.640 15:29:01 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:22.640 15:29:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.640 15:29:01 -- common/autotest_common.sh@10 -- # set +x 00:05:22.640 ************************************ 00:05:22.640 START TEST accel_copy_crc32c_C2 00:05:22.640 ************************************ 00:05:22.640 15:29:01 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:22.640 15:29:01 -- accel/accel.sh@16 -- # local accel_opc 00:05:22.640 15:29:01 -- accel/accel.sh@17 -- # local accel_module 00:05:22.640 15:29:01 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:22.640 15:29:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:22.640 15:29:01 -- accel/accel.sh@12 -- # build_accel_config 00:05:22.640 15:29:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:22.640 15:29:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.640 15:29:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.640 15:29:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:22.640 15:29:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:22.640 15:29:01 -- accel/accel.sh@41 -- # local IFS=, 00:05:22.640 15:29:01 -- accel/accel.sh@42 -- # jq -r . 00:05:22.640 [2024-07-10 15:29:01.872193] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:22.640 [2024-07-10 15:29:01.872277] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003666 ] 00:05:22.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.640 [2024-07-10 15:29:01.937635] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.898 [2024-07-10 15:29:02.057874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.275 15:29:03 -- accel/accel.sh@18 -- # out=' 00:05:24.275 SPDK Configuration: 00:05:24.275 Core mask: 0x1 00:05:24.275 00:05:24.275 Accel Perf Configuration: 00:05:24.275 Workload Type: copy_crc32c 00:05:24.275 CRC-32C seed: 0 00:05:24.275 Vector size: 4096 bytes 00:05:24.275 Transfer size: 8192 bytes 00:05:24.275 Vector count 2 00:05:24.275 Module: software 00:05:24.275 Queue depth: 32 00:05:24.275 Allocate depth: 32 00:05:24.275 # threads/core: 1 00:05:24.275 Run time: 1 seconds 00:05:24.275 Verify: Yes 00:05:24.275 00:05:24.275 Running for 1 seconds... 
00:05:24.275 00:05:24.275 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:24.275 ------------------------------------------------------------------------------------ 00:05:24.276 0,0 154528/s 1207 MiB/s 0 0 00:05:24.276 ==================================================================================== 00:05:24.276 Total 154528/s 603 MiB/s 0 0' 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:05:24.276 15:29:03 -- accel/accel.sh@12 -- # build_accel_config 00:05:24.276 15:29:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:24.276 15:29:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.276 15:29:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.276 15:29:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:24.276 15:29:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:24.276 15:29:03 -- accel/accel.sh@41 -- # local IFS=, 00:05:24.276 15:29:03 -- accel/accel.sh@42 -- # jq -r . 00:05:24.276 [2024-07-10 15:29:03.361054] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:24.276 [2024-07-10 15:29:03.361135] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003813 ] 00:05:24.276 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.276 [2024-07-10 15:29:03.423970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.276 [2024-07-10 15:29:03.543921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val= 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val= 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=0x1 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val= 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val= 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=copy_crc32c 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=0 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 
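Editor's note on the -C 2 case: the extra argument shows up in the configuration above as Vector count 2 with an 8192-byte transfer built from two 4096-byte vectors. The per-core row matches the 8192-byte transfer (154528 x 8192 B is about 1207 MiB/s), while the Total row's 603 MiB/s matches the 4096-byte vector size instead; that looks like a quirk of how the summary line is computed rather than a different measurement, though this log alone cannot confirm it. A standalone equivalent, under the same local-build assumptions as the earlier sketches:
# Two 4096-byte source vectors per copy+crc32c operation, as in the run above.
sudo ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2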
00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val='8192 bytes' 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val= 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=software 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@23 -- # accel_module=software 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=32 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=32 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=1 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val=Yes 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val= 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:24.276 15:29:03 -- accel/accel.sh@21 -- # val= 00:05:24.276 15:29:03 -- accel/accel.sh@22 -- # case "$var" in 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # IFS=: 00:05:24.276 15:29:03 -- accel/accel.sh@20 -- # read -r var val 00:05:25.650 15:29:04 -- accel/accel.sh@21 -- # val= 00:05:25.650 15:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # IFS=: 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # read -r var val 00:05:25.650 15:29:04 -- accel/accel.sh@21 -- # val= 00:05:25.650 15:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # IFS=: 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # read -r var val 00:05:25.650 15:29:04 -- accel/accel.sh@21 -- # val= 00:05:25.650 15:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # IFS=: 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # read -r var val 00:05:25.650 15:29:04 -- accel/accel.sh@21 -- # val= 00:05:25.650 15:29:04 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # IFS=: 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # read -r var val 00:05:25.650 15:29:04 -- accel/accel.sh@21 -- # val= 00:05:25.650 15:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # IFS=: 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # read -r var val 00:05:25.650 15:29:04 -- accel/accel.sh@21 -- # val= 00:05:25.650 15:29:04 -- accel/accel.sh@22 -- # case "$var" in 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # IFS=: 00:05:25.650 15:29:04 -- accel/accel.sh@20 -- # read -r var val 00:05:25.650 15:29:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:25.650 15:29:04 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:05:25.650 15:29:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.650 00:05:25.650 real 0m2.974s 00:05:25.650 user 0m2.677s 00:05:25.650 sys 0m0.290s 00:05:25.650 15:29:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.650 15:29:04 -- common/autotest_common.sh@10 -- # set +x 00:05:25.650 ************************************ 00:05:25.650 END TEST accel_copy_crc32c_C2 00:05:25.650 ************************************ 00:05:25.650 15:29:04 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:25.650 15:29:04 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:25.650 15:29:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.650 15:29:04 -- common/autotest_common.sh@10 -- # set +x 00:05:25.650 ************************************ 00:05:25.650 START TEST accel_dualcast 00:05:25.650 ************************************ 00:05:25.650 15:29:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:05:25.650 15:29:04 -- accel/accel.sh@16 -- # local accel_opc 00:05:25.650 15:29:04 -- accel/accel.sh@17 -- # local accel_module 00:05:25.650 15:29:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:05:25.650 15:29:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:25.650 15:29:04 -- accel/accel.sh@12 -- # build_accel_config 00:05:25.650 15:29:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:25.650 15:29:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.650 15:29:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.650 15:29:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:25.650 15:29:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:25.650 15:29:04 -- accel/accel.sh@41 -- # local IFS=, 00:05:25.650 15:29:04 -- accel/accel.sh@42 -- # jq -r . 00:05:25.650 [2024-07-10 15:29:04.873672] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:25.650 [2024-07-10 15:29:04.873761] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004031 ] 00:05:25.650 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.650 [2024-07-10 15:29:04.935611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.908 [2024-07-10 15:29:05.057221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.281 15:29:06 -- accel/accel.sh@18 -- # out=' 00:05:27.282 SPDK Configuration: 00:05:27.282 Core mask: 0x1 00:05:27.282 00:05:27.282 Accel Perf Configuration: 00:05:27.282 Workload Type: dualcast 00:05:27.282 Transfer size: 4096 bytes 00:05:27.282 Vector count 1 00:05:27.282 Module: software 00:05:27.282 Queue depth: 32 00:05:27.282 Allocate depth: 32 00:05:27.282 # threads/core: 1 00:05:27.282 Run time: 1 seconds 00:05:27.282 Verify: Yes 00:05:27.282 00:05:27.282 Running for 1 seconds... 00:05:27.282 00:05:27.282 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:27.282 ------------------------------------------------------------------------------------ 00:05:27.282 0,0 297600/s 1162 MiB/s 0 0 00:05:27.282 ==================================================================================== 00:05:27.282 Total 297600/s 1162 MiB/s 0 0' 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:05:27.282 15:29:06 -- accel/accel.sh@12 -- # build_accel_config 00:05:27.282 15:29:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:27.282 15:29:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.282 15:29:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.282 15:29:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:27.282 15:29:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:27.282 15:29:06 -- accel/accel.sh@41 -- # local IFS=, 00:05:27.282 15:29:06 -- accel/accel.sh@42 -- # jq -r . 00:05:27.282 [2024-07-10 15:29:06.360523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
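Editor's note on dualcast: the run above reaches 297600 transfers/s, about 1162 MiB/s on the software module. In SPDK's accel framework dualcast copies one source buffer into two destination buffers, so each transfer moves twice the data of a plain copy. A standalone invocation, again assuming a local build and prior hugepage setup:
# Dualcast workload as run above: one 4096-byte source written to two destinations.
sudo ./build/examples/accel_perf -t 1 -w dualcast -y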
00:05:27.282 [2024-07-10 15:29:06.360604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004231 ] 00:05:27.282 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.282 [2024-07-10 15:29:06.422960] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.282 [2024-07-10 15:29:06.542159] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val= 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val= 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val=0x1 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val= 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val= 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val=dualcast 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val= 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val=software 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@23 -- # accel_module=software 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val=32 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val=32 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val=1 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 
-- accel/accel.sh@21 -- # val='1 seconds' 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val=Yes 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val= 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:27.282 15:29:06 -- accel/accel.sh@21 -- # val= 00:05:27.282 15:29:06 -- accel/accel.sh@22 -- # case "$var" in 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # IFS=: 00:05:27.282 15:29:06 -- accel/accel.sh@20 -- # read -r var val 00:05:28.656 15:29:07 -- accel/accel.sh@21 -- # val= 00:05:28.656 15:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.656 15:29:07 -- accel/accel.sh@20 -- # IFS=: 00:05:28.656 15:29:07 -- accel/accel.sh@20 -- # read -r var val 00:05:28.656 15:29:07 -- accel/accel.sh@21 -- # val= 00:05:28.656 15:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.656 15:29:07 -- accel/accel.sh@20 -- # IFS=: 00:05:28.656 15:29:07 -- accel/accel.sh@20 -- # read -r var val 00:05:28.656 15:29:07 -- accel/accel.sh@21 -- # val= 00:05:28.657 15:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 15:29:07 -- accel/accel.sh@21 -- # val= 00:05:28.657 15:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 15:29:07 -- accel/accel.sh@21 -- # val= 00:05:28.657 15:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 15:29:07 -- accel/accel.sh@21 -- # val= 00:05:28.657 15:29:07 -- accel/accel.sh@22 -- # case "$var" in 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # IFS=: 00:05:28.657 15:29:07 -- accel/accel.sh@20 -- # read -r var val 00:05:28.657 15:29:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:28.657 15:29:07 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:05:28.657 15:29:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:28.657 00:05:28.657 real 0m2.958s 00:05:28.657 user 0m2.667s 00:05:28.657 sys 0m0.282s 00:05:28.657 15:29:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.657 15:29:07 -- common/autotest_common.sh@10 -- # set +x 00:05:28.657 ************************************ 00:05:28.657 END TEST accel_dualcast 00:05:28.657 ************************************ 00:05:28.657 15:29:07 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:05:28.657 15:29:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:28.657 15:29:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.657 15:29:07 -- common/autotest_common.sh@10 -- # set +x 00:05:28.657 ************************************ 00:05:28.657 START TEST accel_compare 00:05:28.657 ************************************ 00:05:28.657 15:29:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:05:28.657 15:29:07 -- accel/accel.sh@16 -- # local accel_opc 00:05:28.657 15:29:07 
-- accel/accel.sh@17 -- # local accel_module 00:05:28.657 15:29:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:05:28.657 15:29:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:28.657 15:29:07 -- accel/accel.sh@12 -- # build_accel_config 00:05:28.657 15:29:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:28.657 15:29:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:28.657 15:29:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:28.657 15:29:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:28.657 15:29:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:28.657 15:29:07 -- accel/accel.sh@41 -- # local IFS=, 00:05:28.657 15:29:07 -- accel/accel.sh@42 -- # jq -r . 00:05:28.657 [2024-07-10 15:29:07.857378] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:28.657 [2024-07-10 15:29:07.857464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004400 ] 00:05:28.657 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.657 [2024-07-10 15:29:07.918980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.915 [2024-07-10 15:29:08.041369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.286 15:29:09 -- accel/accel.sh@18 -- # out=' 00:05:30.286 SPDK Configuration: 00:05:30.286 Core mask: 0x1 00:05:30.286 00:05:30.286 Accel Perf Configuration: 00:05:30.286 Workload Type: compare 00:05:30.286 Transfer size: 4096 bytes 00:05:30.286 Vector count 1 00:05:30.286 Module: software 00:05:30.286 Queue depth: 32 00:05:30.286 Allocate depth: 32 00:05:30.286 # threads/core: 1 00:05:30.286 Run time: 1 seconds 00:05:30.286 Verify: Yes 00:05:30.286 00:05:30.286 Running for 1 seconds... 00:05:30.286 00:05:30.286 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:30.286 ------------------------------------------------------------------------------------ 00:05:30.286 0,0 400032/s 1562 MiB/s 0 0 00:05:30.286 ==================================================================================== 00:05:30.286 Total 400032/s 1562 MiB/s 0 0' 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:05:30.286 15:29:09 -- accel/accel.sh@12 -- # build_accel_config 00:05:30.286 15:29:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:30.286 15:29:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:30.286 15:29:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:30.286 15:29:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:30.286 15:29:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:30.286 15:29:09 -- accel/accel.sh@41 -- # local IFS=, 00:05:30.286 15:29:09 -- accel/accel.sh@42 -- # jq -r . 00:05:30.286 [2024-07-10 15:29:09.345102] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:30.286 [2024-07-10 15:29:09.345182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004540 ] 00:05:30.286 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.286 [2024-07-10 15:29:09.408561] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.286 [2024-07-10 15:29:09.528339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val= 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val= 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val=0x1 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val= 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val= 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val=compare 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@24 -- # accel_opc=compare 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val= 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val=software 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@23 -- # accel_module=software 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val=32 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val=32 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- accel/accel.sh@21 -- # val=1 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.286 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.286 15:29:09 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:05:30.286 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.287 15:29:09 -- accel/accel.sh@21 -- # val=Yes 00:05:30.287 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.287 15:29:09 -- accel/accel.sh@21 -- # val= 00:05:30.287 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:30.287 15:29:09 -- accel/accel.sh@21 -- # val= 00:05:30.287 15:29:09 -- accel/accel.sh@22 -- # case "$var" in 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # IFS=: 00:05:30.287 15:29:09 -- accel/accel.sh@20 -- # read -r var val 00:05:31.659 15:29:10 -- accel/accel.sh@21 -- # val= 00:05:31.659 15:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # IFS=: 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # read -r var val 00:05:31.659 15:29:10 -- accel/accel.sh@21 -- # val= 00:05:31.659 15:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # IFS=: 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # read -r var val 00:05:31.659 15:29:10 -- accel/accel.sh@21 -- # val= 00:05:31.659 15:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # IFS=: 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # read -r var val 00:05:31.659 15:29:10 -- accel/accel.sh@21 -- # val= 00:05:31.659 15:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # IFS=: 00:05:31.659 15:29:10 -- accel/accel.sh@20 -- # read -r var val 00:05:31.659 15:29:10 -- accel/accel.sh@21 -- # val= 00:05:31.659 15:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.660 15:29:10 -- accel/accel.sh@20 -- # IFS=: 00:05:31.660 15:29:10 -- accel/accel.sh@20 -- # read -r var val 00:05:31.660 15:29:10 -- accel/accel.sh@21 -- # val= 00:05:31.660 15:29:10 -- accel/accel.sh@22 -- # case "$var" in 00:05:31.660 15:29:10 -- accel/accel.sh@20 -- # IFS=: 00:05:31.660 15:29:10 -- accel/accel.sh@20 -- # read -r var val 00:05:31.660 15:29:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:31.660 15:29:10 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:05:31.660 15:29:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:31.660 00:05:31.660 real 0m2.975s 00:05:31.660 user 0m2.676s 00:05:31.660 sys 0m0.290s 00:05:31.660 15:29:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.660 15:29:10 -- common/autotest_common.sh@10 -- # set +x 00:05:31.660 ************************************ 00:05:31.660 END TEST accel_compare 00:05:31.660 ************************************ 00:05:31.660 15:29:10 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:05:31.660 15:29:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:05:31.660 15:29:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.660 15:29:10 -- common/autotest_common.sh@10 -- # set +x 00:05:31.660 ************************************ 00:05:31.660 START TEST accel_xor 00:05:31.660 ************************************ 00:05:31.660 15:29:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:05:31.660 15:29:10 -- accel/accel.sh@16 -- # local accel_opc 00:05:31.660 15:29:10 -- accel/accel.sh@17 
-- # local accel_module 00:05:31.660 15:29:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:05:31.660 15:29:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:31.660 15:29:10 -- accel/accel.sh@12 -- # build_accel_config 00:05:31.660 15:29:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:31.660 15:29:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.660 15:29:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.660 15:29:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:31.660 15:29:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:31.660 15:29:10 -- accel/accel.sh@41 -- # local IFS=, 00:05:31.660 15:29:10 -- accel/accel.sh@42 -- # jq -r . 00:05:31.660 [2024-07-10 15:29:10.860632] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:31.660 [2024-07-10 15:29:10.860710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004818 ] 00:05:31.660 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.660 [2024-07-10 15:29:10.922514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.917 [2024-07-10 15:29:11.045817] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.288 15:29:12 -- accel/accel.sh@18 -- # out=' 00:05:33.288 SPDK Configuration: 00:05:33.288 Core mask: 0x1 00:05:33.288 00:05:33.288 Accel Perf Configuration: 00:05:33.288 Workload Type: xor 00:05:33.288 Source buffers: 2 00:05:33.288 Transfer size: 4096 bytes 00:05:33.288 Vector count 1 00:05:33.288 Module: software 00:05:33.288 Queue depth: 32 00:05:33.288 Allocate depth: 32 00:05:33.288 # threads/core: 1 00:05:33.288 Run time: 1 seconds 00:05:33.288 Verify: Yes 00:05:33.288 00:05:33.288 Running for 1 seconds... 00:05:33.288 00:05:33.288 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:33.288 ------------------------------------------------------------------------------------ 00:05:33.288 0,0 191680/s 748 MiB/s 0 0 00:05:33.288 ==================================================================================== 00:05:33.288 Total 191680/s 748 MiB/s 0 0' 00:05:33.288 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.288 15:29:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:05:33.288 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.288 15:29:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:05:33.288 15:29:12 -- accel/accel.sh@12 -- # build_accel_config 00:05:33.288 15:29:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:33.288 15:29:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.288 15:29:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.288 15:29:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:33.288 15:29:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:33.288 15:29:12 -- accel/accel.sh@41 -- # local IFS=, 00:05:33.288 15:29:12 -- accel/accel.sh@42 -- # jq -r . 00:05:33.288 [2024-07-10 15:29:12.338277] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
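Editor's note on xor: the run above uses the default two source buffers (Source buffers: 2 in the configuration) and reaches 191680 transfers/s, about 748 MiB/s; the -x flag raises the source-buffer count, as the -x 3 variant further down shows with Source buffers: 3. Both invocations outside the harness, under the same assumptions as the earlier sketches:
# xor across the default two source buffers, then across three (-x 3), matching the runs in this log.
sudo ./build/examples/accel_perf -t 1 -w xor -y
sudo ./build/examples/accel_perf -t 1 -w xor -y -x 3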
00:05:33.288 [2024-07-10 15:29:12.338357] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2004967 ] 00:05:33.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.288 [2024-07-10 15:29:12.400389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.288 [2024-07-10 15:29:12.520171] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.288 15:29:12 -- accel/accel.sh@21 -- # val= 00:05:33.288 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val= 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val=0x1 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val= 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val= 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val=xor 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val=2 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val= 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val=software 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@23 -- # accel_module=software 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val=32 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val=32 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- 
accel/accel.sh@21 -- # val=1 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val=Yes 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val= 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:33.289 15:29:12 -- accel/accel.sh@21 -- # val= 00:05:33.289 15:29:12 -- accel/accel.sh@22 -- # case "$var" in 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # IFS=: 00:05:33.289 15:29:12 -- accel/accel.sh@20 -- # read -r var val 00:05:34.659 15:29:13 -- accel/accel.sh@21 -- # val= 00:05:34.659 15:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # IFS=: 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # read -r var val 00:05:34.659 15:29:13 -- accel/accel.sh@21 -- # val= 00:05:34.659 15:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # IFS=: 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # read -r var val 00:05:34.659 15:29:13 -- accel/accel.sh@21 -- # val= 00:05:34.659 15:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # IFS=: 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # read -r var val 00:05:34.659 15:29:13 -- accel/accel.sh@21 -- # val= 00:05:34.659 15:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # IFS=: 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # read -r var val 00:05:34.659 15:29:13 -- accel/accel.sh@21 -- # val= 00:05:34.659 15:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # IFS=: 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # read -r var val 00:05:34.659 15:29:13 -- accel/accel.sh@21 -- # val= 00:05:34.659 15:29:13 -- accel/accel.sh@22 -- # case "$var" in 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # IFS=: 00:05:34.659 15:29:13 -- accel/accel.sh@20 -- # read -r var val 00:05:34.659 15:29:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:34.659 15:29:13 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:34.659 15:29:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.659 00:05:34.659 real 0m2.965s 00:05:34.659 user 0m2.679s 00:05:34.659 sys 0m0.278s 00:05:34.659 15:29:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.659 15:29:13 -- common/autotest_common.sh@10 -- # set +x 00:05:34.659 ************************************ 00:05:34.659 END TEST accel_xor 00:05:34.659 ************************************ 00:05:34.659 15:29:13 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:05:34.659 15:29:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:34.659 15:29:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.659 15:29:13 -- common/autotest_common.sh@10 -- # set +x 00:05:34.659 ************************************ 00:05:34.659 START TEST accel_xor 
00:05:34.659 ************************************ 00:05:34.659 15:29:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:05:34.659 15:29:13 -- accel/accel.sh@16 -- # local accel_opc 00:05:34.659 15:29:13 -- accel/accel.sh@17 -- # local accel_module 00:05:34.659 15:29:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:05:34.659 15:29:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:34.659 15:29:13 -- accel/accel.sh@12 -- # build_accel_config 00:05:34.659 15:29:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:34.659 15:29:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:34.659 15:29:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:34.659 15:29:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:34.659 15:29:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:34.659 15:29:13 -- accel/accel.sh@41 -- # local IFS=, 00:05:34.659 15:29:13 -- accel/accel.sh@42 -- # jq -r . 00:05:34.659 [2024-07-10 15:29:13.851402] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:34.659 [2024-07-10 15:29:13.851502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005127 ] 00:05:34.659 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.659 [2024-07-10 15:29:13.913079] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.659 [2024-07-10 15:29:14.033692] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.033 15:29:15 -- accel/accel.sh@18 -- # out=' 00:05:36.033 SPDK Configuration: 00:05:36.033 Core mask: 0x1 00:05:36.033 00:05:36.033 Accel Perf Configuration: 00:05:36.033 Workload Type: xor 00:05:36.033 Source buffers: 3 00:05:36.033 Transfer size: 4096 bytes 00:05:36.033 Vector count 1 00:05:36.033 Module: software 00:05:36.033 Queue depth: 32 00:05:36.033 Allocate depth: 32 00:05:36.033 # threads/core: 1 00:05:36.033 Run time: 1 seconds 00:05:36.033 Verify: Yes 00:05:36.033 00:05:36.033 Running for 1 seconds... 00:05:36.033 00:05:36.033 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:36.033 ------------------------------------------------------------------------------------ 00:05:36.033 0,0 182656/s 713 MiB/s 0 0 00:05:36.033 ==================================================================================== 00:05:36.033 Total 182656/s 713 MiB/s 0 0' 00:05:36.033 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.033 15:29:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:05:36.033 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.033 15:29:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:05:36.033 15:29:15 -- accel/accel.sh@12 -- # build_accel_config 00:05:36.033 15:29:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:36.033 15:29:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:36.033 15:29:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:36.033 15:29:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:36.033 15:29:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:36.033 15:29:15 -- accel/accel.sh@41 -- # local IFS=, 00:05:36.033 15:29:15 -- accel/accel.sh@42 -- # jq -r . 00:05:36.033 [2024-07-10 15:29:15.332331] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
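The second accel_xor case above differs only in passing -x 3, which per the reported configuration raises the source-buffer count from 2 to 3 and nudges throughput down to about 182k transfers/s. Assuming the same checkout as in the earlier sketch, the extra flag is all that changes:

  # 1-second xor workload over three source buffers, verify results
  ./build/examples/accel_perf -t 1 -w xor -y -x 3   # run from the SPDK checkout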
00:05:36.033 [2024-07-10 15:29:15.332412] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005386 ] 00:05:36.033 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.033 [2024-07-10 15:29:15.393826] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.292 [2024-07-10 15:29:15.514761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val= 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val= 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val=0x1 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val= 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val= 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val=xor 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@24 -- # accel_opc=xor 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val=3 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val= 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val=software 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@23 -- # accel_module=software 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val=32 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val=32 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- 
accel/accel.sh@21 -- # val=1 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val=Yes 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val= 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:36.292 15:29:15 -- accel/accel.sh@21 -- # val= 00:05:36.292 15:29:15 -- accel/accel.sh@22 -- # case "$var" in 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # IFS=: 00:05:36.292 15:29:15 -- accel/accel.sh@20 -- # read -r var val 00:05:37.667 15:29:16 -- accel/accel.sh@21 -- # val= 00:05:37.667 15:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # IFS=: 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # read -r var val 00:05:37.667 15:29:16 -- accel/accel.sh@21 -- # val= 00:05:37.667 15:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # IFS=: 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # read -r var val 00:05:37.667 15:29:16 -- accel/accel.sh@21 -- # val= 00:05:37.667 15:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # IFS=: 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # read -r var val 00:05:37.667 15:29:16 -- accel/accel.sh@21 -- # val= 00:05:37.667 15:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # IFS=: 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # read -r var val 00:05:37.667 15:29:16 -- accel/accel.sh@21 -- # val= 00:05:37.667 15:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # IFS=: 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # read -r var val 00:05:37.667 15:29:16 -- accel/accel.sh@21 -- # val= 00:05:37.667 15:29:16 -- accel/accel.sh@22 -- # case "$var" in 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # IFS=: 00:05:37.667 15:29:16 -- accel/accel.sh@20 -- # read -r var val 00:05:37.667 15:29:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:37.667 15:29:16 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:05:37.667 15:29:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:37.667 00:05:37.667 real 0m2.967s 00:05:37.667 user 0m2.662s 00:05:37.667 sys 0m0.298s 00:05:37.667 15:29:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.667 15:29:16 -- common/autotest_common.sh@10 -- # set +x 00:05:37.667 ************************************ 00:05:37.667 END TEST accel_xor 00:05:37.667 ************************************ 00:05:37.667 15:29:16 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:05:37.667 15:29:16 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:37.667 15:29:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.667 15:29:16 -- common/autotest_common.sh@10 -- # set +x 00:05:37.667 ************************************ 00:05:37.667 START TEST 
accel_dif_verify 00:05:37.667 ************************************ 00:05:37.667 15:29:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:05:37.667 15:29:16 -- accel/accel.sh@16 -- # local accel_opc 00:05:37.667 15:29:16 -- accel/accel.sh@17 -- # local accel_module 00:05:37.667 15:29:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:05:37.667 15:29:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:37.667 15:29:16 -- accel/accel.sh@12 -- # build_accel_config 00:05:37.667 15:29:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:37.667 15:29:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:37.667 15:29:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:37.667 15:29:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:37.667 15:29:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:37.667 15:29:16 -- accel/accel.sh@41 -- # local IFS=, 00:05:37.667 15:29:16 -- accel/accel.sh@42 -- # jq -r . 00:05:37.667 [2024-07-10 15:29:16.843267] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:37.667 [2024-07-10 15:29:16.843349] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005545 ] 00:05:37.667 EAL: No free 2048 kB hugepages reported on node 1 00:05:37.667 [2024-07-10 15:29:16.907288] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.667 [2024-07-10 15:29:17.032824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.038 15:29:18 -- accel/accel.sh@18 -- # out=' 00:05:39.038 SPDK Configuration: 00:05:39.038 Core mask: 0x1 00:05:39.038 00:05:39.038 Accel Perf Configuration: 00:05:39.038 Workload Type: dif_verify 00:05:39.038 Vector size: 4096 bytes 00:05:39.038 Transfer size: 4096 bytes 00:05:39.038 Block size: 512 bytes 00:05:39.038 Metadata size: 8 bytes 00:05:39.038 Vector count 1 00:05:39.038 Module: software 00:05:39.038 Queue depth: 32 00:05:39.038 Allocate depth: 32 00:05:39.038 # threads/core: 1 00:05:39.038 Run time: 1 seconds 00:05:39.038 Verify: No 00:05:39.038 00:05:39.038 Running for 1 seconds... 00:05:39.038 00:05:39.038 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:39.038 ------------------------------------------------------------------------------------ 00:05:39.038 0,0 81120/s 321 MiB/s 0 0 00:05:39.038 ==================================================================================== 00:05:39.038 Total 81120/s 316 MiB/s 0 0' 00:05:39.038 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.038 15:29:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:05:39.038 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.038 15:29:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:05:39.038 15:29:18 -- accel/accel.sh@12 -- # build_accel_config 00:05:39.038 15:29:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:39.038 15:29:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:39.038 15:29:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:39.038 15:29:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:39.038 15:29:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:39.038 15:29:18 -- accel/accel.sh@41 -- # local IFS=, 00:05:39.038 15:29:18 -- accel/accel.sh@42 -- # jq -r . 
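The dif_verify pass above exercises T10 DIF checking on the software path: 4096-byte vectors with a 512-byte block size and 8 bytes of metadata, no -y at the perf-tool level (the DIF comparison itself is the workload), sustaining about 81k transfers/s. A hedged manual reproduction, again dropping the harness's /dev/fd/62 JSON config:

  # 1-second DIF-verify workload on the software module
  ./build/examples/accel_perf -t 1 -w dif_verify   # run from the SPDK checkout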
00:05:39.038 [2024-07-10 15:29:18.330894] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:39.038 [2024-07-10 15:29:18.330982] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005695 ] 00:05:39.038 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.038 [2024-07-10 15:29:18.391694] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.351 [2024-07-10 15:29:18.514216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val= 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val= 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val=0x1 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val= 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val= 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val=dif_verify 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.351 15:29:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:39.351 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.351 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val= 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val=software 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@23 -- # 
accel_module=software 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val=32 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val=32 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val=1 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val=No 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val= 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:39.352 15:29:18 -- accel/accel.sh@21 -- # val= 00:05:39.352 15:29:18 -- accel/accel.sh@22 -- # case "$var" in 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # IFS=: 00:05:39.352 15:29:18 -- accel/accel.sh@20 -- # read -r var val 00:05:40.720 15:29:19 -- accel/accel.sh@21 -- # val= 00:05:40.720 15:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # IFS=: 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # read -r var val 00:05:40.720 15:29:19 -- accel/accel.sh@21 -- # val= 00:05:40.720 15:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # IFS=: 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # read -r var val 00:05:40.720 15:29:19 -- accel/accel.sh@21 -- # val= 00:05:40.720 15:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # IFS=: 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # read -r var val 00:05:40.720 15:29:19 -- accel/accel.sh@21 -- # val= 00:05:40.720 15:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # IFS=: 00:05:40.720 15:29:19 -- accel/accel.sh@20 -- # read -r var val 00:05:40.720 15:29:19 -- accel/accel.sh@21 -- # val= 00:05:40.720 15:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.721 15:29:19 -- accel/accel.sh@20 -- # IFS=: 00:05:40.721 15:29:19 -- accel/accel.sh@20 -- # read -r var val 00:05:40.721 15:29:19 -- accel/accel.sh@21 -- # val= 00:05:40.721 15:29:19 -- accel/accel.sh@22 -- # case "$var" in 00:05:40.721 15:29:19 -- accel/accel.sh@20 -- # IFS=: 00:05:40.721 15:29:19 -- accel/accel.sh@20 -- # read -r var val 00:05:40.721 15:29:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:40.721 15:29:19 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:05:40.721 15:29:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:40.721 00:05:40.721 real 0m2.963s 00:05:40.721 user 0m2.670s 00:05:40.721 sys 0m0.288s 00:05:40.721 15:29:19 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.721 15:29:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.721 ************************************ 00:05:40.721 END TEST accel_dif_verify 00:05:40.721 ************************************ 00:05:40.721 15:29:19 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:05:40.721 15:29:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:40.721 15:29:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.721 15:29:19 -- common/autotest_common.sh@10 -- # set +x 00:05:40.721 ************************************ 00:05:40.721 START TEST accel_dif_generate 00:05:40.721 ************************************ 00:05:40.721 15:29:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:05:40.721 15:29:19 -- accel/accel.sh@16 -- # local accel_opc 00:05:40.721 15:29:19 -- accel/accel.sh@17 -- # local accel_module 00:05:40.721 15:29:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:05:40.721 15:29:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:40.721 15:29:19 -- accel/accel.sh@12 -- # build_accel_config 00:05:40.721 15:29:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:40.721 15:29:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:40.721 15:29:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:40.721 15:29:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:40.721 15:29:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:40.721 15:29:19 -- accel/accel.sh@41 -- # local IFS=, 00:05:40.721 15:29:19 -- accel/accel.sh@42 -- # jq -r . 00:05:40.721 [2024-07-10 15:29:19.835983] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:40.721 [2024-07-10 15:29:19.836070] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2005967 ] 00:05:40.721 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.721 [2024-07-10 15:29:19.897472] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.721 [2024-07-10 15:29:20.022078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.094 15:29:21 -- accel/accel.sh@18 -- # out=' 00:05:42.094 SPDK Configuration: 00:05:42.094 Core mask: 0x1 00:05:42.094 00:05:42.094 Accel Perf Configuration: 00:05:42.094 Workload Type: dif_generate 00:05:42.094 Vector size: 4096 bytes 00:05:42.094 Transfer size: 4096 bytes 00:05:42.094 Block size: 512 bytes 00:05:42.094 Metadata size: 8 bytes 00:05:42.094 Vector count 1 00:05:42.094 Module: software 00:05:42.094 Queue depth: 32 00:05:42.094 Allocate depth: 32 00:05:42.094 # threads/core: 1 00:05:42.094 Run time: 1 seconds 00:05:42.094 Verify: No 00:05:42.094 00:05:42.094 Running for 1 seconds... 
00:05:42.094 00:05:42.094 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:42.094 ------------------------------------------------------------------------------------ 00:05:42.094 0,0 96128/s 381 MiB/s 0 0 00:05:42.094 ==================================================================================== 00:05:42.094 Total 96128/s 375 MiB/s 0 0' 00:05:42.094 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.094 15:29:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:05:42.094 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.094 15:29:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:05:42.094 15:29:21 -- accel/accel.sh@12 -- # build_accel_config 00:05:42.094 15:29:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:42.094 15:29:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:42.094 15:29:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:42.094 15:29:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:42.094 15:29:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:42.094 15:29:21 -- accel/accel.sh@41 -- # local IFS=, 00:05:42.094 15:29:21 -- accel/accel.sh@42 -- # jq -r . 00:05:42.094 [2024-07-10 15:29:21.326457] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:42.094 [2024-07-10 15:29:21.326538] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006113 ] 00:05:42.094 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.094 [2024-07-10 15:29:21.387311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.353 [2024-07-10 15:29:21.509567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val= 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val= 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val=0x1 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val= 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val= 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val=dif_generate 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 
00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val='512 bytes' 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val='8 bytes' 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val= 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val=software 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@23 -- # accel_module=software 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val=32 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val=32 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val=1 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val=No 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val= 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:42.353 15:29:21 -- accel/accel.sh@21 -- # val= 00:05:42.353 15:29:21 -- accel/accel.sh@22 -- # case "$var" in 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # IFS=: 00:05:42.353 15:29:21 -- accel/accel.sh@20 -- # read -r var val 00:05:43.788 15:29:22 -- accel/accel.sh@21 -- # val= 00:05:43.788 15:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # IFS=: 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # read -r var val 00:05:43.788 15:29:22 -- accel/accel.sh@21 -- # val= 00:05:43.788 15:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # IFS=: 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # read -r var val 00:05:43.788 15:29:22 -- accel/accel.sh@21 -- # val= 00:05:43.788 15:29:22 -- 
accel/accel.sh@22 -- # case "$var" in 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # IFS=: 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # read -r var val 00:05:43.788 15:29:22 -- accel/accel.sh@21 -- # val= 00:05:43.788 15:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # IFS=: 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # read -r var val 00:05:43.788 15:29:22 -- accel/accel.sh@21 -- # val= 00:05:43.788 15:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # IFS=: 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # read -r var val 00:05:43.788 15:29:22 -- accel/accel.sh@21 -- # val= 00:05:43.788 15:29:22 -- accel/accel.sh@22 -- # case "$var" in 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # IFS=: 00:05:43.788 15:29:22 -- accel/accel.sh@20 -- # read -r var val 00:05:43.788 15:29:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:43.788 15:29:22 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:05:43.788 15:29:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:43.788 00:05:43.788 real 0m2.969s 00:05:43.788 user 0m2.672s 00:05:43.789 sys 0m0.291s 00:05:43.789 15:29:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.789 15:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.789 ************************************ 00:05:43.789 END TEST accel_dif_generate 00:05:43.789 ************************************ 00:05:43.789 15:29:22 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:05:43.789 15:29:22 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:43.789 15:29:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.789 15:29:22 -- common/autotest_common.sh@10 -- # set +x 00:05:43.789 ************************************ 00:05:43.789 START TEST accel_dif_generate_copy 00:05:43.789 ************************************ 00:05:43.789 15:29:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:05:43.789 15:29:22 -- accel/accel.sh@16 -- # local accel_opc 00:05:43.789 15:29:22 -- accel/accel.sh@17 -- # local accel_module 00:05:43.789 15:29:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:05:43.789 15:29:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:43.789 15:29:22 -- accel/accel.sh@12 -- # build_accel_config 00:05:43.789 15:29:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:43.789 15:29:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:43.789 15:29:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:43.789 15:29:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:43.789 15:29:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:43.789 15:29:22 -- accel/accel.sh@41 -- # local IFS=, 00:05:43.789 15:29:22 -- accel/accel.sh@42 -- # jq -r . 00:05:43.789 [2024-07-10 15:29:22.830394] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
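The accel_dif_generate test that finishes above generates DIF metadata rather than checking it and comes in faster, around 96k transfers/s. The underlying call, minus the harness's -c /dev/fd/62 config, is simply:

  # 1-second DIF-generate workload
  ./build/examples/accel_perf -t 1 -w dif_generate   # run from the SPDK checkout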
00:05:43.789 [2024-07-10 15:29:22.830492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006280 ] 00:05:43.789 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.789 [2024-07-10 15:29:22.892242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.789 [2024-07-10 15:29:23.016912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.160 15:29:24 -- accel/accel.sh@18 -- # out=' 00:05:45.161 SPDK Configuration: 00:05:45.161 Core mask: 0x1 00:05:45.161 00:05:45.161 Accel Perf Configuration: 00:05:45.161 Workload Type: dif_generate_copy 00:05:45.161 Vector size: 4096 bytes 00:05:45.161 Transfer size: 4096 bytes 00:05:45.161 Vector count 1 00:05:45.161 Module: software 00:05:45.161 Queue depth: 32 00:05:45.161 Allocate depth: 32 00:05:45.161 # threads/core: 1 00:05:45.161 Run time: 1 seconds 00:05:45.161 Verify: No 00:05:45.161 00:05:45.161 Running for 1 seconds... 00:05:45.161 00:05:45.161 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:45.161 ------------------------------------------------------------------------------------ 00:05:45.161 0,0 76224/s 302 MiB/s 0 0 00:05:45.161 ==================================================================================== 00:05:45.161 Total 76224/s 297 MiB/s 0 0' 00:05:45.161 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.161 15:29:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:05:45.161 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.161 15:29:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:05:45.161 15:29:24 -- accel/accel.sh@12 -- # build_accel_config 00:05:45.161 15:29:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:45.161 15:29:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:45.161 15:29:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:45.161 15:29:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:45.161 15:29:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:45.161 15:29:24 -- accel/accel.sh@41 -- # local IFS=, 00:05:45.161 15:29:24 -- accel/accel.sh@42 -- # jq -r . 00:05:45.161 [2024-07-10 15:29:24.312957] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
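dif_generate_copy, whose first pass completes above, combines DIF generation with a buffer copy, which is why throughput drops again to roughly 76k transfers/s. The matching invocation, with the same caveats as the earlier sketches:

  # 1-second DIF generate+copy workload
  ./build/examples/accel_perf -t 1 -w dif_generate_copy   # run from the SPDK checkout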
00:05:45.161 [2024-07-10 15:29:24.313043] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006545 ] 00:05:45.161 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.161 [2024-07-10 15:29:24.374246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.161 [2024-07-10 15:29:24.494238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.419 15:29:24 -- accel/accel.sh@21 -- # val= 00:05:45.419 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.419 15:29:24 -- accel/accel.sh@21 -- # val= 00:05:45.419 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.419 15:29:24 -- accel/accel.sh@21 -- # val=0x1 00:05:45.419 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.419 15:29:24 -- accel/accel.sh@21 -- # val= 00:05:45.419 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.419 15:29:24 -- accel/accel.sh@21 -- # val= 00:05:45.419 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.419 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.419 15:29:24 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val= 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val=software 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@23 -- # accel_module=software 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val=32 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val=32 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r 
var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val=1 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val=No 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val= 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:45.420 15:29:24 -- accel/accel.sh@21 -- # val= 00:05:45.420 15:29:24 -- accel/accel.sh@22 -- # case "$var" in 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # IFS=: 00:05:45.420 15:29:24 -- accel/accel.sh@20 -- # read -r var val 00:05:46.793 15:29:25 -- accel/accel.sh@21 -- # val= 00:05:46.794 15:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # IFS=: 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # read -r var val 00:05:46.794 15:29:25 -- accel/accel.sh@21 -- # val= 00:05:46.794 15:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # IFS=: 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # read -r var val 00:05:46.794 15:29:25 -- accel/accel.sh@21 -- # val= 00:05:46.794 15:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # IFS=: 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # read -r var val 00:05:46.794 15:29:25 -- accel/accel.sh@21 -- # val= 00:05:46.794 15:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # IFS=: 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # read -r var val 00:05:46.794 15:29:25 -- accel/accel.sh@21 -- # val= 00:05:46.794 15:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # IFS=: 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # read -r var val 00:05:46.794 15:29:25 -- accel/accel.sh@21 -- # val= 00:05:46.794 15:29:25 -- accel/accel.sh@22 -- # case "$var" in 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # IFS=: 00:05:46.794 15:29:25 -- accel/accel.sh@20 -- # read -r var val 00:05:46.794 15:29:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:46.794 15:29:25 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:05:46.794 15:29:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:46.794 00:05:46.794 real 0m2.970s 00:05:46.794 user 0m2.675s 00:05:46.794 sys 0m0.287s 00:05:46.794 15:29:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.794 15:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:46.794 ************************************ 00:05:46.794 END TEST accel_dif_generate_copy 00:05:46.794 ************************************ 00:05:46.794 15:29:25 -- accel/accel.sh@107 -- # [[ y == y ]] 00:05:46.794 15:29:25 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.794 15:29:25 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:05:46.794 15:29:25 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.794 15:29:25 -- common/autotest_common.sh@10 -- # set +x 00:05:46.794 ************************************ 00:05:46.794 START TEST accel_comp 00:05:46.794 ************************************ 00:05:46.794 15:29:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.794 15:29:25 -- accel/accel.sh@16 -- # local accel_opc 00:05:46.794 15:29:25 -- accel/accel.sh@17 -- # local accel_module 00:05:46.794 15:29:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.794 15:29:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:46.794 15:29:25 -- accel/accel.sh@12 -- # build_accel_config 00:05:46.794 15:29:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:46.794 15:29:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:46.794 15:29:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:46.794 15:29:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:46.794 15:29:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:46.794 15:29:25 -- accel/accel.sh@41 -- # local IFS=, 00:05:46.794 15:29:25 -- accel/accel.sh@42 -- # jq -r . 00:05:46.794 [2024-07-10 15:29:25.829421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:46.794 [2024-07-10 15:29:25.829540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006698 ] 00:05:46.794 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.794 [2024-07-10 15:29:25.893175] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.794 [2024-07-10 15:29:26.010632] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.168 15:29:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:48.168 00:05:48.168 SPDK Configuration: 00:05:48.168 Core mask: 0x1 00:05:48.168 00:05:48.168 Accel Perf Configuration: 00:05:48.168 Workload Type: compress 00:05:48.168 Transfer size: 4096 bytes 00:05:48.168 Vector count 1 00:05:48.168 Module: software 00:05:48.168 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.168 Queue depth: 32 00:05:48.168 Allocate depth: 32 00:05:48.168 # threads/core: 1 00:05:48.168 Run time: 1 seconds 00:05:48.168 Verify: No 00:05:48.168 00:05:48.168 Running for 1 seconds... 
00:05:48.168 00:05:48.168 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:48.168 ------------------------------------------------------------------------------------ 00:05:48.168 0,0 32320/s 134 MiB/s 0 0 00:05:48.168 ==================================================================================== 00:05:48.168 Total 32320/s 126 MiB/s 0 0' 00:05:48.168 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.168 15:29:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.168 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.168 15:29:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.168 15:29:27 -- accel/accel.sh@12 -- # build_accel_config 00:05:48.168 15:29:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:48.168 15:29:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:48.168 15:29:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:48.168 15:29:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:48.168 15:29:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:48.168 15:29:27 -- accel/accel.sh@41 -- # local IFS=, 00:05:48.168 15:29:27 -- accel/accel.sh@42 -- # jq -r . 00:05:48.168 [2024-07-10 15:29:27.309012] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:48.168 [2024-07-10 15:29:27.309091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2006848 ] 00:05:48.168 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.168 [2024-07-10 15:29:27.371061] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.168 [2024-07-10 15:29:27.487928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.426 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.426 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.426 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.426 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.426 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.426 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.426 15:29:27 -- accel/accel.sh@21 -- # val=0x1 00:05:48.426 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.426 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val=compress 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 
15:29:27 -- accel/accel.sh@24 -- # accel_opc=compress 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val=software 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@23 -- # accel_module=software 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val=32 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val=32 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val=1 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val=No 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:48.427 15:29:27 -- accel/accel.sh@21 -- # val= 00:05:48.427 15:29:27 -- accel/accel.sh@22 -- # case "$var" in 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # IFS=: 00:05:48.427 15:29:27 -- accel/accel.sh@20 -- # read -r var val 00:05:49.801 15:29:28 -- accel/accel.sh@21 -- # val= 00:05:49.801 15:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # IFS=: 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # read -r var val 00:05:49.801 15:29:28 -- accel/accel.sh@21 -- # val= 00:05:49.801 15:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # IFS=: 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # read -r var val 00:05:49.801 15:29:28 -- accel/accel.sh@21 -- # val= 00:05:49.801 15:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # 
IFS=: 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # read -r var val 00:05:49.801 15:29:28 -- accel/accel.sh@21 -- # val= 00:05:49.801 15:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # IFS=: 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # read -r var val 00:05:49.801 15:29:28 -- accel/accel.sh@21 -- # val= 00:05:49.801 15:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # IFS=: 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # read -r var val 00:05:49.801 15:29:28 -- accel/accel.sh@21 -- # val= 00:05:49.801 15:29:28 -- accel/accel.sh@22 -- # case "$var" in 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # IFS=: 00:05:49.801 15:29:28 -- accel/accel.sh@20 -- # read -r var val 00:05:49.801 15:29:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:49.801 15:29:28 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:05:49.801 15:29:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:49.801 00:05:49.801 real 0m2.958s 00:05:49.801 user 0m2.660s 00:05:49.801 sys 0m0.291s 00:05:49.801 15:29:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.801 15:29:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.801 ************************************ 00:05:49.801 END TEST accel_comp 00:05:49.801 ************************************ 00:05:49.801 15:29:28 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.801 15:29:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:05:49.801 15:29:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:49.801 15:29:28 -- common/autotest_common.sh@10 -- # set +x 00:05:49.801 ************************************ 00:05:49.801 START TEST accel_decomp 00:05:49.801 ************************************ 00:05:49.801 15:29:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.801 15:29:28 -- accel/accel.sh@16 -- # local accel_opc 00:05:49.801 15:29:28 -- accel/accel.sh@17 -- # local accel_module 00:05:49.801 15:29:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.801 15:29:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:49.801 15:29:28 -- accel/accel.sh@12 -- # build_accel_config 00:05:49.801 15:29:28 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:49.801 15:29:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:49.801 15:29:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:49.801 15:29:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:49.801 15:29:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:49.801 15:29:28 -- accel/accel.sh@41 -- # local IFS=, 00:05:49.801 15:29:28 -- accel/accel.sh@42 -- # jq -r . 00:05:49.801 [2024-07-10 15:29:28.814388] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
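Note on reading the trace above: each record appears to follow the pattern '<elapsed> <time-of-day> -- <script>@<line> -- # <command>', where the leading 00:MM:SS.mmm field is the Jenkins elapsed-time stamp and the script@line part seems to come from the harness's xtrace prompt (PS4). The START/END TEST banners and the real/user/sys figures appear to be emitted by the run_test wrapper traced as common/autotest_common.sh. For example, one complete record reads:

    00:05:48.427 15:29:27 -- accel/accel.sh@24 -- # accel_opc=compress

i.e. at 5m48s into the build, line 24 of accel.sh set accel_opc=compress.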
00:05:49.802 [2024-07-10 15:29:28.814487] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007126 ] 00:05:49.802 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.802 [2024-07-10 15:29:28.876727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.802 [2024-07-10 15:29:28.997732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.174 15:29:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:51.174 00:05:51.174 SPDK Configuration: 00:05:51.174 Core mask: 0x1 00:05:51.174 00:05:51.174 Accel Perf Configuration: 00:05:51.174 Workload Type: decompress 00:05:51.174 Transfer size: 4096 bytes 00:05:51.174 Vector count 1 00:05:51.174 Module: software 00:05:51.174 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.174 Queue depth: 32 00:05:51.174 Allocate depth: 32 00:05:51.174 # threads/core: 1 00:05:51.174 Run time: 1 seconds 00:05:51.174 Verify: Yes 00:05:51.174 00:05:51.174 Running for 1 seconds... 00:05:51.174 00:05:51.174 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:51.174 ------------------------------------------------------------------------------------ 00:05:51.174 0,0 55424/s 102 MiB/s 0 0 00:05:51.174 ==================================================================================== 00:05:51.174 Total 55424/s 216 MiB/s 0 0' 00:05:51.174 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.174 15:29:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.174 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.174 15:29:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:05:51.174 15:29:30 -- accel/accel.sh@12 -- # build_accel_config 00:05:51.174 15:29:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:51.174 15:29:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:51.174 15:29:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:51.174 15:29:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:51.174 15:29:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:51.174 15:29:30 -- accel/accel.sh@41 -- # local IFS=, 00:05:51.174 15:29:30 -- accel/accel.sh@42 -- # jq -r . 00:05:51.174 [2024-07-10 15:29:30.304878] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
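Note: the accel_perf command line in this run maps directly onto the 'Accel Perf Configuration' echo it prints. A hedged reading of the invocation shown above (the /dev/fd/62 interpretation is an inference from the surrounding build_accel_config trace, not something the tool itself states):

    -c /dev/fd/62           accel JSON config streamed in from build_accel_config
    -t 1                    'Run time: 1 seconds'
    -w decompress           'Workload Type: decompress'
    -l .../test/accel/bib   'File Name: ...' (the compressed input)
    -y                      'Verify: Yes' (the compress case above sets val=No instead, so it skips verification)

Queue depth and allocate depth stay at the script's default of 32 (the repeated val=32 assignments).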
00:05:51.174 [2024-07-10 15:29:30.304959] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007272 ] 00:05:51.174 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.174 [2024-07-10 15:29:30.366548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.174 [2024-07-10 15:29:30.486619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=0x1 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=decompress 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=software 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@23 -- # accel_module=software 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=32 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 
-- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=32 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=1 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val=Yes 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:51.433 15:29:30 -- accel/accel.sh@21 -- # val= 00:05:51.433 15:29:30 -- accel/accel.sh@22 -- # case "$var" in 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # IFS=: 00:05:51.433 15:29:30 -- accel/accel.sh@20 -- # read -r var val 00:05:52.807 15:29:31 -- accel/accel.sh@21 -- # val= 00:05:52.807 15:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # IFS=: 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # read -r var val 00:05:52.807 15:29:31 -- accel/accel.sh@21 -- # val= 00:05:52.807 15:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # IFS=: 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # read -r var val 00:05:52.807 15:29:31 -- accel/accel.sh@21 -- # val= 00:05:52.807 15:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # IFS=: 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # read -r var val 00:05:52.807 15:29:31 -- accel/accel.sh@21 -- # val= 00:05:52.807 15:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # IFS=: 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # read -r var val 00:05:52.807 15:29:31 -- accel/accel.sh@21 -- # val= 00:05:52.807 15:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # IFS=: 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # read -r var val 00:05:52.807 15:29:31 -- accel/accel.sh@21 -- # val= 00:05:52.807 15:29:31 -- accel/accel.sh@22 -- # case "$var" in 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # IFS=: 00:05:52.807 15:29:31 -- accel/accel.sh@20 -- # read -r var val 00:05:52.807 15:29:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:52.807 15:29:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:52.807 15:29:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:52.807 00:05:52.807 real 0m2.979s 00:05:52.807 user 0m2.682s 00:05:52.807 sys 0m0.291s 00:05:52.807 15:29:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.807 15:29:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.807 ************************************ 00:05:52.807 END TEST accel_decomp 00:05:52.807 ************************************ 00:05:52.807 15:29:31 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.807 15:29:31 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:52.807 15:29:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.807 15:29:31 -- common/autotest_common.sh@10 -- # set +x 00:05:52.807 ************************************ 00:05:52.807 START TEST accel_decmop_full 00:05:52.807 ************************************ 00:05:52.807 15:29:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.807 15:29:31 -- accel/accel.sh@16 -- # local accel_opc 00:05:52.807 15:29:31 -- accel/accel.sh@17 -- # local accel_module 00:05:52.807 15:29:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.807 15:29:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:52.807 15:29:31 -- accel/accel.sh@12 -- # build_accel_config 00:05:52.807 15:29:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:52.807 15:29:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:52.807 15:29:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:52.808 15:29:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:52.808 15:29:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:52.808 15:29:31 -- accel/accel.sh@41 -- # local IFS=, 00:05:52.808 15:29:31 -- accel/accel.sh@42 -- # jq -r . 00:05:52.808 [2024-07-10 15:29:31.819249] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:52.808 [2024-07-10 15:29:31.819330] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007427 ] 00:05:52.808 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.808 [2024-07-10 15:29:31.881941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.808 [2024-07-10 15:29:31.999301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.180 15:29:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:54.180 00:05:54.180 SPDK Configuration: 00:05:54.180 Core mask: 0x1 00:05:54.180 00:05:54.180 Accel Perf Configuration: 00:05:54.180 Workload Type: decompress 00:05:54.180 Transfer size: 111250 bytes 00:05:54.180 Vector count 1 00:05:54.180 Module: software 00:05:54.180 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.180 Queue depth: 32 00:05:54.180 Allocate depth: 32 00:05:54.180 # threads/core: 1 00:05:54.180 Run time: 1 seconds 00:05:54.180 Verify: Yes 00:05:54.180 00:05:54.180 Running for 1 seconds... 
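Note: the only difference from the plain decompress case is the added -o 0. With it, the echoed Transfer size changes from 4096 to 111250 bytes, so accel_perf appears to size each operation from the test input rather than the 4 KiB default. The bandwidth columns are consistent with Bandwidth ≈ Transfers/s × transfer size: the 4096-byte run above works out to 55424/s × 4096 B ≈ 216 MiB/s, and the Total row that follows to 3808/s × 111250 B ≈ 404 MiB/s.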
00:05:54.180 00:05:54.180 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:54.180 ------------------------------------------------------------------------------------ 00:05:54.180 0,0 3808/s 157 MiB/s 0 0 00:05:54.180 ==================================================================================== 00:05:54.180 Total 3808/s 404 MiB/s 0 0' 00:05:54.180 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.180 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.180 15:29:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.180 15:29:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:05:54.180 15:29:33 -- accel/accel.sh@12 -- # build_accel_config 00:05:54.180 15:29:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:54.180 15:29:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:54.180 15:29:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:54.180 15:29:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:54.180 15:29:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:54.180 15:29:33 -- accel/accel.sh@41 -- # local IFS=, 00:05:54.180 15:29:33 -- accel/accel.sh@42 -- # jq -r . 00:05:54.180 [2024-07-10 15:29:33.314005] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:54.180 [2024-07-10 15:29:33.314087] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007664 ] 00:05:54.180 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.180 [2024-07-10 15:29:33.375783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.180 [2024-07-10 15:29:33.495779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.436 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.436 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=0x1 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=decompress 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" 
in 00:05:54.437 15:29:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=software 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@23 -- # accel_module=software 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=32 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=32 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=1 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val=Yes 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:54.437 15:29:33 -- accel/accel.sh@21 -- # val= 00:05:54.437 15:29:33 -- accel/accel.sh@22 -- # case "$var" in 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # IFS=: 00:05:54.437 15:29:33 -- accel/accel.sh@20 -- # read -r var val 00:05:55.810 15:29:34 -- accel/accel.sh@21 -- # val= 00:05:55.810 15:29:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # IFS=: 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # read -r var val 00:05:55.810 15:29:34 -- accel/accel.sh@21 -- # val= 00:05:55.810 15:29:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # IFS=: 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # read -r var val 00:05:55.810 15:29:34 -- accel/accel.sh@21 -- # val= 00:05:55.810 15:29:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.810 15:29:34 -- 
accel/accel.sh@20 -- # IFS=: 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # read -r var val 00:05:55.810 15:29:34 -- accel/accel.sh@21 -- # val= 00:05:55.810 15:29:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # IFS=: 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # read -r var val 00:05:55.810 15:29:34 -- accel/accel.sh@21 -- # val= 00:05:55.810 15:29:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # IFS=: 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # read -r var val 00:05:55.810 15:29:34 -- accel/accel.sh@21 -- # val= 00:05:55.810 15:29:34 -- accel/accel.sh@22 -- # case "$var" in 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # IFS=: 00:05:55.810 15:29:34 -- accel/accel.sh@20 -- # read -r var val 00:05:55.810 15:29:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:55.810 15:29:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:55.810 15:29:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:55.810 00:05:55.810 real 0m2.994s 00:05:55.810 user 0m2.698s 00:05:55.810 sys 0m0.289s 00:05:55.810 15:29:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.810 15:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.810 ************************************ 00:05:55.810 END TEST accel_decmop_full 00:05:55.810 ************************************ 00:05:55.810 15:29:34 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.810 15:29:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:05:55.810 15:29:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:55.810 15:29:34 -- common/autotest_common.sh@10 -- # set +x 00:05:55.810 ************************************ 00:05:55.810 START TEST accel_decomp_mcore 00:05:55.810 ************************************ 00:05:55.810 15:29:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.810 15:29:34 -- accel/accel.sh@16 -- # local accel_opc 00:05:55.810 15:29:34 -- accel/accel.sh@17 -- # local accel_module 00:05:55.810 15:29:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.810 15:29:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:55.810 15:29:34 -- accel/accel.sh@12 -- # build_accel_config 00:05:55.810 15:29:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:55.810 15:29:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:55.810 15:29:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:55.810 15:29:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:55.810 15:29:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:55.810 15:29:34 -- accel/accel.sh@41 -- # local IFS=, 00:05:55.810 15:29:34 -- accel/accel.sh@42 -- # jq -r . 00:05:55.810 [2024-07-10 15:29:34.839020] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
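Note: accel_decomp_mcore adds -m 0xf, a core mask rather than a core count; 0xf is binary 1111, i.e. cores 0-3, which is why the EAL parameters below carry -c 0xf and four reactors come up. A quick way to build such a mask in the same shell dialect the script uses:

    printf '0x%x\n' $(( (1<<0) | (1<<1) | (1<<2) | (1<<3) ))    # prints 0xf, cores 0-3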
00:05:55.810 [2024-07-10 15:29:34.839105] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007853 ] 00:05:55.810 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.810 [2024-07-10 15:29:34.900870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:55.810 [2024-07-10 15:29:35.024671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:55.810 [2024-07-10 15:29:35.024724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:55.810 [2024-07-10 15:29:35.024775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.810 [2024-07-10 15:29:35.024778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.183 15:29:36 -- accel/accel.sh@18 -- # out='Preparing input file... 00:05:57.183 00:05:57.183 SPDK Configuration: 00:05:57.183 Core mask: 0xf 00:05:57.183 00:05:57.183 Accel Perf Configuration: 00:05:57.183 Workload Type: decompress 00:05:57.183 Transfer size: 4096 bytes 00:05:57.183 Vector count 1 00:05:57.183 Module: software 00:05:57.183 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.183 Queue depth: 32 00:05:57.183 Allocate depth: 32 00:05:57.183 # threads/core: 1 00:05:57.183 Run time: 1 seconds 00:05:57.183 Verify: Yes 00:05:57.183 00:05:57.183 Running for 1 seconds... 00:05:57.183 00:05:57.183 Core,Thread Transfers Bandwidth Failed Miscompares 00:05:57.183 ------------------------------------------------------------------------------------ 00:05:57.183 0,0 50240/s 92 MiB/s 0 0 00:05:57.183 3,0 50752/s 93 MiB/s 0 0 00:05:57.183 2,0 50528/s 93 MiB/s 0 0 00:05:57.183 1,0 50624/s 93 MiB/s 0 0 00:05:57.183 ==================================================================================== 00:05:57.183 Total 202144/s 789 MiB/s 0 0' 00:05:57.183 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.183 15:29:36 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.183 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.183 15:29:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:05:57.183 15:29:36 -- accel/accel.sh@12 -- # build_accel_config 00:05:57.183 15:29:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:57.183 15:29:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:57.183 15:29:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:57.183 15:29:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:57.183 15:29:36 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:57.183 15:29:36 -- accel/accel.sh@41 -- # local IFS=, 00:05:57.183 15:29:36 -- accel/accel.sh@42 -- # jq -r . 00:05:57.183 [2024-07-10 15:29:36.340115] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
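Note: in the table above the per-(core,thread) rows add up to the Total row: 50240 + 50752 + 50528 + 50624 = 202144 transfers/s, and 202144/s × 4096 B ≈ 789 MiB/s, matching the printed aggregate. That is roughly 3.6× the 55424/s single-core software decompress earlier in this log, i.e. close to linear scaling across the four reactors.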
00:05:57.183 [2024-07-10 15:29:36.340196] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008002 ] 00:05:57.183 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.183 [2024-07-10 15:29:36.402293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.183 [2024-07-10 15:29:36.523775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.183 [2024-07-10 15:29:36.523827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.183 [2024-07-10 15:29:36.523881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.183 [2024-07-10 15:29:36.523885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.441 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.441 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.441 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.441 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.441 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.441 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.441 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.441 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.441 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.441 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.441 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.441 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.441 15:29:36 -- accel/accel.sh@21 -- # val=0xf 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val=decompress 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val=software 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@23 -- # accel_module=software 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case 
"$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val=32 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val=32 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val=1 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val=Yes 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:57.442 15:29:36 -- accel/accel.sh@21 -- # val= 00:05:57.442 15:29:36 -- accel/accel.sh@22 -- # case "$var" in 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # IFS=: 00:05:57.442 15:29:36 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 
15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@21 -- # val= 00:05:58.817 15:29:37 -- accel/accel.sh@22 -- # case "$var" in 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # IFS=: 00:05:58.817 15:29:37 -- accel/accel.sh@20 -- # read -r var val 00:05:58.817 15:29:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:05:58.817 15:29:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:05:58.817 15:29:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:58.817 00:05:58.817 real 0m2.990s 00:05:58.817 user 0m9.609s 00:05:58.817 sys 0m0.311s 00:05:58.817 15:29:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.817 15:29:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.817 ************************************ 00:05:58.817 END TEST accel_decomp_mcore 00:05:58.817 ************************************ 00:05:58.817 15:29:37 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.817 15:29:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:05:58.817 15:29:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:58.817 15:29:37 -- common/autotest_common.sh@10 -- # set +x 00:05:58.817 ************************************ 00:05:58.817 START TEST accel_decomp_full_mcore 00:05:58.817 ************************************ 00:05:58.817 15:29:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.817 15:29:37 -- accel/accel.sh@16 -- # local accel_opc 00:05:58.817 15:29:37 -- accel/accel.sh@17 -- # local accel_module 00:05:58.817 15:29:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.817 15:29:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:58.817 15:29:37 -- accel/accel.sh@12 -- # build_accel_config 00:05:58.817 15:29:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:05:58.817 15:29:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:58.817 15:29:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:58.817 15:29:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:05:58.817 15:29:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:05:58.817 15:29:37 -- accel/accel.sh@41 -- # local IFS=, 00:05:58.817 15:29:37 -- accel/accel.sh@42 -- # jq -r . 00:05:58.817 [2024-07-10 15:29:37.855197] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
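Note: the timing just above (real 0m2.990s, user 0m9.609s) is expected for the 0xf run: wall-clock time stays around 3 s because the four reactors execute in parallel, while user time is summed across cores, so 9.609 / 2.990 ≈ 3.2 cores were kept busy on average.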
00:05:58.817 [2024-07-10 15:29:37.855283] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008286 ] 00:05:58.817 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.817 [2024-07-10 15:29:37.921478] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.817 [2024-07-10 15:29:38.044391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.817 [2024-07-10 15:29:38.044491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.817 [2024-07-10 15:29:38.044488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:58.817 [2024-07-10 15:29:38.044460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.208 15:29:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:00.208 00:06:00.208 SPDK Configuration: 00:06:00.208 Core mask: 0xf 00:06:00.208 00:06:00.208 Accel Perf Configuration: 00:06:00.208 Workload Type: decompress 00:06:00.208 Transfer size: 111250 bytes 00:06:00.208 Vector count 1 00:06:00.208 Module: software 00:06:00.208 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.208 Queue depth: 32 00:06:00.208 Allocate depth: 32 00:06:00.208 # threads/core: 1 00:06:00.208 Run time: 1 seconds 00:06:00.208 Verify: Yes 00:06:00.208 00:06:00.208 Running for 1 seconds... 00:06:00.208 00:06:00.208 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:00.208 ------------------------------------------------------------------------------------ 00:06:00.208 0,0 3776/s 155 MiB/s 0 0 00:06:00.208 3,0 3776/s 155 MiB/s 0 0 00:06:00.208 2,0 3776/s 155 MiB/s 0 0 00:06:00.208 1,0 3776/s 155 MiB/s 0 0 00:06:00.208 ==================================================================================== 00:06:00.208 Total 15104/s 1602 MiB/s 0 0' 00:06:00.208 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.208 15:29:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.209 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.209 15:29:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:00.209 15:29:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:00.209 15:29:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:00.209 15:29:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:00.209 15:29:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:00.209 15:29:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:00.209 15:29:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:00.209 15:29:39 -- accel/accel.sh@41 -- # local IFS=, 00:06:00.209 15:29:39 -- accel/accel.sh@42 -- # jq -r . 00:06:00.209 [2024-07-10 15:29:39.371003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
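Note: the recurring 'EAL: No free 2048 kB hugepages reported on node 1' line is informational in these runs (every test still completes); it typically means the 2 MB hugepage pool was reserved only on NUMA node 0. If per-node reservation were needed, it is usually handled through SPDK's scripts/setup.sh (for example HUGEMEM=4096 HUGENODE=1 ./scripts/setup.sh; exact variables depend on the SPDK version) or directly via sysfs, e.g.:

    echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages    # example count, requires root

Neither command is part of this job; they are only illustrations of how that notice would normally be addressed.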
00:06:00.209 [2024-07-10 15:29:39.371089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008436 ] 00:06:00.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.209 [2024-07-10 15:29:39.439690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.209 [2024-07-10 15:29:39.562293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.209 [2024-07-10 15:29:39.562345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.209 [2024-07-10 15:29:39.562399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.209 [2024-07-10 15:29:39.562402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=0xf 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=decompress 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=software 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@23 -- # accel_module=software 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=32 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=32 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=1 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val=Yes 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:00.466 15:29:39 -- accel/accel.sh@21 -- # val= 00:06:00.466 15:29:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # IFS=: 00:06:00.466 15:29:39 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 
15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@21 -- # val= 00:06:01.837 15:29:40 -- accel/accel.sh@22 -- # case "$var" in 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # IFS=: 00:06:01.837 15:29:40 -- accel/accel.sh@20 -- # read -r var val 00:06:01.837 15:29:40 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:01.837 15:29:40 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:01.837 15:29:40 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:01.837 00:06:01.837 real 0m3.038s 00:06:01.837 user 0m9.746s 00:06:01.837 sys 0m0.314s 00:06:01.837 15:29:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.837 15:29:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.837 ************************************ 00:06:01.837 END TEST accel_decomp_full_mcore 00:06:01.837 ************************************ 00:06:01.837 15:29:40 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.837 15:29:40 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:01.837 15:29:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.837 15:29:40 -- common/autotest_common.sh@10 -- # set +x 00:06:01.837 ************************************ 00:06:01.837 START TEST accel_decomp_mthread 00:06:01.837 ************************************ 00:06:01.837 15:29:40 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.837 15:29:40 -- accel/accel.sh@16 -- # local accel_opc 00:06:01.837 15:29:40 -- accel/accel.sh@17 -- # local accel_module 00:06:01.837 15:29:40 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.837 15:29:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:01.837 15:29:40 -- accel/accel.sh@12 -- # build_accel_config 00:06:01.837 15:29:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:01.837 15:29:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:01.837 15:29:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:01.837 15:29:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:01.837 15:29:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:01.837 15:29:40 -- accel/accel.sh@41 -- # local IFS=, 00:06:01.837 15:29:40 -- accel/accel.sh@42 -- # jq -r . 00:06:01.837 [2024-07-10 15:29:40.920800] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:01.837 [2024-07-10 15:29:40.920884] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008592 ] 00:06:01.837 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.837 [2024-07-10 15:29:40.982330] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.837 [2024-07-10 15:29:41.101932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.206 15:29:42 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:03.206 00:06:03.206 SPDK Configuration: 00:06:03.206 Core mask: 0x1 00:06:03.206 00:06:03.206 Accel Perf Configuration: 00:06:03.206 Workload Type: decompress 00:06:03.206 Transfer size: 4096 bytes 00:06:03.206 Vector count 1 00:06:03.206 Module: software 00:06:03.206 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.206 Queue depth: 32 00:06:03.206 Allocate depth: 32 00:06:03.206 # threads/core: 2 00:06:03.206 Run time: 1 seconds 00:06:03.206 Verify: Yes 00:06:03.206 00:06:03.206 Running for 1 seconds... 00:06:03.206 00:06:03.206 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:03.206 ------------------------------------------------------------------------------------ 00:06:03.206 0,1 28032/s 51 MiB/s 0 0 00:06:03.206 0,0 27936/s 51 MiB/s 0 0 00:06:03.206 ==================================================================================== 00:06:03.206 Total 55968/s 218 MiB/s 0 0' 00:06:03.206 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.206 15:29:42 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.206 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.206 15:29:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:03.206 15:29:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:03.206 15:29:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:03.206 15:29:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.206 15:29:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.206 15:29:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:03.206 15:29:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:03.206 15:29:42 -- accel/accel.sh@41 -- # local IFS=, 00:06:03.206 15:29:42 -- accel/accel.sh@42 -- # jq -r . 00:06:03.206 [2024-07-10 15:29:42.401056] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
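Note: -T 2 runs two worker threads on core 0, which is why the table above has rows 0,1 and 0,0 (Core,Thread per the header). Their rates sum to the Total: 28032 + 27936 = 55968 transfers/s ≈ 218 MiB/s, essentially the same as the 55424/s single-thread decompress earlier, which is consistent with the software decompress path already saturating core 0, so the second thread adds no extra throughput.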
00:06:03.206 [2024-07-10 15:29:42.401142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2008862 ] 00:06:03.206 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.206 [2024-07-10 15:29:42.466816] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.463 [2024-07-10 15:29:42.584612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.463 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.463 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.463 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.463 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.463 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.463 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.463 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.463 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.463 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.463 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.463 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.463 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.463 15:29:42 -- accel/accel.sh@21 -- # val=0x1 00:06:03.463 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.463 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val=decompress 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val=software 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@23 -- # accel_module=software 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val=32 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 
-- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val=32 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val=2 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val=Yes 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:03.464 15:29:42 -- accel/accel.sh@21 -- # val= 00:06:03.464 15:29:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # IFS=: 00:06:03.464 15:29:42 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@21 -- # val= 00:06:04.834 15:29:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # IFS=: 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@21 -- # val= 00:06:04.834 15:29:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # IFS=: 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@21 -- # val= 00:06:04.834 15:29:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # IFS=: 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@21 -- # val= 00:06:04.834 15:29:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # IFS=: 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@21 -- # val= 00:06:04.834 15:29:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # IFS=: 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@21 -- # val= 00:06:04.834 15:29:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # IFS=: 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@21 -- # val= 00:06:04.834 15:29:43 -- accel/accel.sh@22 -- # case "$var" in 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # IFS=: 00:06:04.834 15:29:43 -- accel/accel.sh@20 -- # read -r var val 00:06:04.834 15:29:43 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:04.834 15:29:43 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:04.834 15:29:43 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:04.834 00:06:04.834 real 0m2.961s 00:06:04.834 user 0m2.674s 00:06:04.834 sys 0m0.280s 00:06:04.834 15:29:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.834 15:29:43 -- common/autotest_common.sh@10 -- # set +x 
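The real/user/sys figures just above are emitted as the harness closes out the accel_decomp_mthread case; every case in this log is driven through a run_test-style wrapper that prints the START TEST / END TEST banners and times the body (the real/user/sys lines suggest the body runs under bash's time builtin). The wrapper below is only a minimal sketch of that pattern, not SPDK's actual helper — the function name, banner width, and second-granularity timing are illustrative.

    # Minimal run_test-style wrapper (illustrative; not the harness's exact helper).
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        local start=$SECONDS
        "$@"                                   # execute the test command verbatim
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        echo "elapsed $((SECONDS - start))s, exit code $rc"
        return $rc
    }
    # The harness invokes it as, e.g.:
    #   run_test accel_decomp_mthread accel_test -t 1 -w decompress -l "$SPDK_DIR/test/accel/bib" -y -T 2
    # where accel_test is the harness's own shell function.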
00:06:04.834 ************************************ 00:06:04.834 END TEST accel_decomp_mthread 00:06:04.834 ************************************ 00:06:04.834 15:29:43 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.834 15:29:43 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:04.834 15:29:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.834 15:29:43 -- common/autotest_common.sh@10 -- # set +x 00:06:04.834 ************************************ 00:06:04.834 START TEST accel_deomp_full_mthread 00:06:04.834 ************************************ 00:06:04.834 15:29:43 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.834 15:29:43 -- accel/accel.sh@16 -- # local accel_opc 00:06:04.834 15:29:43 -- accel/accel.sh@17 -- # local accel_module 00:06:04.834 15:29:43 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.834 15:29:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:04.834 15:29:43 -- accel/accel.sh@12 -- # build_accel_config 00:06:04.834 15:29:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:04.834 15:29:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.834 15:29:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.834 15:29:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:04.834 15:29:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:04.834 15:29:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:04.834 15:29:43 -- accel/accel.sh@42 -- # jq -r . 00:06:04.834 [2024-07-10 15:29:43.906372] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:04.835 [2024-07-10 15:29:43.906458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009034 ] 00:06:04.835 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.835 [2024-07-10 15:29:43.967361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.835 [2024-07-10 15:29:44.087765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.209 15:29:45 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:06.209 00:06:06.209 SPDK Configuration: 00:06:06.209 Core mask: 0x1 00:06:06.209 00:06:06.209 Accel Perf Configuration: 00:06:06.209 Workload Type: decompress 00:06:06.209 Transfer size: 111250 bytes 00:06:06.209 Vector count 1 00:06:06.209 Module: software 00:06:06.209 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.209 Queue depth: 32 00:06:06.209 Allocate depth: 32 00:06:06.209 # threads/core: 2 00:06:06.209 Run time: 1 seconds 00:06:06.209 Verify: Yes 00:06:06.209 00:06:06.209 Running for 1 seconds... 
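The "Accel Perf Configuration" block above is printed by the accel_perf example itself; the flags that produced it appear in the preceding trace (-t 1 -w decompress -l .../test/accel/bib -y -o 0 -T 2, with the JSON accel config fed over /dev/fd/62). A standalone approximation of the same invocation, handing the config over as an ordinary file instead of a file descriptor, is sketched below — the checkout path and the config file name are assumptions.

    # Sketch: re-run the full-buffer multithread decompress case by hand.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # adjust to the local checkout
    ACCEL_CFG=/tmp/accel.json          # JSON accel config; the harness builds this on /dev/fd/62
    args=(
        -c "$ACCEL_CFG"                # accel module configuration
        -t 1                           # run time: 1 second
        -w decompress                  # workload type
        -l "$SPDK_DIR/test/accel/bib"      # pre-compressed input file shipped with the tests
        -y                             # verify the decompressed output ("Verify: Yes")
        -o 0                           # transfer size 0 -> full-buffer 111250-byte transfers
        -T 2                           # two worker threads per core ("# threads/core: 2")
    )
    "$SPDK_DIR/build/examples/accel_perf" "${args[@]}"

With the software module on a single core (mask 0x1), the two worker threads show up as the 0,0 and 0,1 rows of the results table that follows.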
00:06:06.209 00:06:06.209 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:06.209 ------------------------------------------------------------------------------------ 00:06:06.209 0,1 1952/s 80 MiB/s 0 0 00:06:06.209 0,0 1920/s 79 MiB/s 0 0 00:06:06.209 ==================================================================================== 00:06:06.209 Total 3872/s 410 MiB/s 0 0' 00:06:06.209 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.209 15:29:45 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.209 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.209 15:29:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:06.209 15:29:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.209 15:29:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.209 15:29:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.209 15:29:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.209 15:29:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.209 15:29:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.209 15:29:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.209 15:29:45 -- accel/accel.sh@42 -- # jq -r . 00:06:06.209 [2024-07-10 15:29:45.431988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:06.209 [2024-07-10 15:29:45.432071] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009180 ] 00:06:06.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.209 [2024-07-10 15:29:45.493179] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.467 [2024-07-10 15:29:45.616000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val=0x1 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val=decompress 00:06:06.467 
15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val=software 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@23 -- # accel_module=software 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.467 15:29:45 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:06.467 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.467 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.468 15:29:45 -- accel/accel.sh@21 -- # val=32 00:06:06.468 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.468 15:29:45 -- accel/accel.sh@21 -- # val=32 00:06:06.468 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.468 15:29:45 -- accel/accel.sh@21 -- # val=2 00:06:06.468 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.468 15:29:45 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:06.468 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.468 15:29:45 -- accel/accel.sh@21 -- # val=Yes 00:06:06.468 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.468 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.468 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:06.468 15:29:45 -- accel/accel.sh@21 -- # val= 00:06:06.468 15:29:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # IFS=: 00:06:06.468 15:29:45 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@21 -- # val= 00:06:07.840 15:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # IFS=: 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@21 -- # val= 00:06:07.840 15:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # IFS=: 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@21 -- # val= 00:06:07.840 15:29:46 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # IFS=: 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@21 -- # val= 00:06:07.840 15:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # IFS=: 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@21 -- # val= 00:06:07.840 15:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # IFS=: 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@21 -- # val= 00:06:07.840 15:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # IFS=: 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@21 -- # val= 00:06:07.840 15:29:46 -- accel/accel.sh@22 -- # case "$var" in 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # IFS=: 00:06:07.840 15:29:46 -- accel/accel.sh@20 -- # read -r var val 00:06:07.840 15:29:46 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:07.840 15:29:46 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:07.840 15:29:46 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.840 00:06:07.840 real 0m3.039s 00:06:07.840 user 0m2.755s 00:06:07.840 sys 0m0.278s 00:06:07.840 15:29:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.840 15:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.840 ************************************ 00:06:07.840 END TEST accel_deomp_full_mthread 00:06:07.840 ************************************ 00:06:07.840 15:29:46 -- accel/accel.sh@116 -- # [[ n == y ]] 00:06:07.840 15:29:46 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:07.840 15:29:46 -- accel/accel.sh@129 -- # build_accel_config 00:06:07.840 15:29:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:07.840 15:29:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:07.840 15:29:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.840 15:29:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.840 15:29:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.840 15:29:46 -- common/autotest_common.sh@10 -- # set +x 00:06:07.840 15:29:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:07.840 15:29:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:07.840 15:29:46 -- accel/accel.sh@41 -- # local IFS=, 00:06:07.840 15:29:46 -- accel/accel.sh@42 -- # jq -r . 00:06:07.840 ************************************ 00:06:07.840 START TEST accel_dif_functional_tests 00:06:07.840 ************************************ 00:06:07.840 15:29:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:07.840 [2024-07-10 15:29:46.992421] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:07.840 [2024-07-10 15:29:46.992521] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009458 ] 00:06:07.840 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.840 [2024-07-10 15:29:47.054089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.840 [2024-07-10 15:29:47.177260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.840 [2024-07-10 15:29:47.177313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.840 [2024-07-10 15:29:47.177316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.098 00:06:08.098 00:06:08.098 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.098 http://cunit.sourceforge.net/ 00:06:08.098 00:06:08.098 00:06:08.098 Suite: accel_dif 00:06:08.098 Test: verify: DIF generated, GUARD check ...passed 00:06:08.098 Test: verify: DIF generated, APPTAG check ...passed 00:06:08.098 Test: verify: DIF generated, REFTAG check ...passed 00:06:08.098 Test: verify: DIF not generated, GUARD check ...[2024-07-10 15:29:47.271813] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:08.098 [2024-07-10 15:29:47.271876] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:08.098 passed 00:06:08.098 Test: verify: DIF not generated, APPTAG check ...[2024-07-10 15:29:47.271924] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:08.098 [2024-07-10 15:29:47.271953] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:08.098 passed 00:06:08.098 Test: verify: DIF not generated, REFTAG check ...[2024-07-10 15:29:47.271990] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:08.098 [2024-07-10 15:29:47.272017] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:08.098 passed 00:06:08.098 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:08.098 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-10 15:29:47.272088] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:08.098 passed 00:06:08.098 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:08.098 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:08.098 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:08.098 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-10 15:29:47.272243] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:08.098 passed 00:06:08.098 Test: generate copy: DIF generated, GUARD check ...passed 00:06:08.098 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:08.098 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:08.098 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:08.098 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:08.098 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:08.098 Test: generate copy: iovecs-len validate ...[2024-07-10 15:29:47.272511] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:08.098 passed 00:06:08.098 Test: generate copy: buffer alignment validate ...passed 00:06:08.098 00:06:08.098 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.098 suites 1 1 n/a 0 0 00:06:08.098 tests 20 20 20 0 0 00:06:08.098 asserts 204 204 204 0 n/a 00:06:08.098 00:06:08.098 Elapsed time = 0.003 seconds 00:06:08.356 00:06:08.356 real 0m0.581s 00:06:08.356 user 0m0.871s 00:06:08.356 sys 0m0.175s 00:06:08.356 15:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.356 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.356 ************************************ 00:06:08.356 END TEST accel_dif_functional_tests 00:06:08.356 ************************************ 00:06:08.356 00:06:08.356 real 1m3.228s 00:06:08.356 user 1m11.154s 00:06:08.356 sys 0m7.161s 00:06:08.356 15:29:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.356 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.356 ************************************ 00:06:08.356 END TEST accel 00:06:08.356 ************************************ 00:06:08.356 15:29:47 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:08.356 15:29:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.356 15:29:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.356 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.356 ************************************ 00:06:08.356 START TEST accel_rpc 00:06:08.356 ************************************ 00:06:08.356 15:29:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:08.356 * Looking for test storage... 00:06:08.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:08.356 15:29:47 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.356 15:29:47 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2009527 00:06:08.356 15:29:47 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:08.356 15:29:47 -- accel/accel_rpc.sh@15 -- # waitforlisten 2009527 00:06:08.356 15:29:47 -- common/autotest_common.sh@819 -- # '[' -z 2009527 ']' 00:06:08.356 15:29:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.356 15:29:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:08.356 15:29:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.356 15:29:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:08.356 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.356 [2024-07-10 15:29:47.680679] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:08.356 [2024-07-10 15:29:47.680771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009527 ] 00:06:08.356 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.614 [2024-07-10 15:29:47.737557] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.614 [2024-07-10 15:29:47.841946] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:08.614 [2024-07-10 15:29:47.842116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.614 15:29:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.614 15:29:47 -- common/autotest_common.sh@852 -- # return 0 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:08.614 15:29:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:08.614 15:29:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:08.614 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.614 ************************************ 00:06:08.614 START TEST accel_assign_opcode 00:06:08.614 ************************************ 00:06:08.614 15:29:47 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:08.614 15:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.614 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.614 [2024-07-10 15:29:47.890633] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:08.614 15:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:08.614 15:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.614 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.614 [2024-07-10 15:29:47.898651] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:08.614 15:29:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.614 15:29:47 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:08.614 15:29:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.614 15:29:47 -- common/autotest_common.sh@10 -- # set +x 00:06:08.872 15:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.872 15:29:48 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:08.872 15:29:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:08.872 15:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:08.872 15:29:48 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:08.872 15:29:48 -- accel/accel_rpc.sh@42 -- # grep software 00:06:08.872 15:29:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:08.872 software 00:06:08.872 00:06:08.872 real 0m0.305s 00:06:08.872 user 0m0.036s 00:06:08.872 sys 0m0.011s 00:06:08.872 15:29:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.872 15:29:48 -- common/autotest_common.sh@10 -- # set +x 
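The assign-opcode case above drives the target entirely through rpc_cmd: assign the copy opcode (first to a deliberately bogus module, then to software) before framework_start_init, then read the assignment back with accel_get_opc_assignments. Outside the harness the same sequence maps onto plain scripts/rpc.py calls; the sketch below assumes the default /var/tmp/spdk.sock socket and substitutes a sleep for the harness's waitforlisten helper.

    # Sketch: pre-init opcode assignment against a spdk_tgt started with --wait-for-rpc.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
    tgt_pid=$!
    sleep 2                                                       # crude stand-in for waitforlisten

    "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software   # must happen before init
    "$SPDK_DIR/scripts/rpc.py" framework_start_init                   # finish subsystem init
    "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # prints: software

    kill "$tgt_pid"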
00:06:08.872 ************************************ 00:06:08.872 END TEST accel_assign_opcode 00:06:08.872 ************************************ 00:06:08.872 15:29:48 -- accel/accel_rpc.sh@55 -- # killprocess 2009527 00:06:08.872 15:29:48 -- common/autotest_common.sh@926 -- # '[' -z 2009527 ']' 00:06:08.872 15:29:48 -- common/autotest_common.sh@930 -- # kill -0 2009527 00:06:08.872 15:29:48 -- common/autotest_common.sh@931 -- # uname 00:06:08.872 15:29:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.872 15:29:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2009527 00:06:08.872 15:29:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.872 15:29:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.872 15:29:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2009527' 00:06:08.872 killing process with pid 2009527 00:06:08.872 15:29:48 -- common/autotest_common.sh@945 -- # kill 2009527 00:06:08.872 15:29:48 -- common/autotest_common.sh@950 -- # wait 2009527 00:06:09.439 00:06:09.439 real 0m1.120s 00:06:09.439 user 0m1.024s 00:06:09.439 sys 0m0.426s 00:06:09.439 15:29:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.439 15:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.439 ************************************ 00:06:09.439 END TEST accel_rpc 00:06:09.439 ************************************ 00:06:09.439 15:29:48 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.439 15:29:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.439 15:29:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.439 15:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.439 ************************************ 00:06:09.439 START TEST app_cmdline 00:06:09.439 ************************************ 00:06:09.439 15:29:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:09.439 * Looking for test storage... 00:06:09.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:09.439 15:29:48 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:09.439 15:29:48 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2009732 00:06:09.439 15:29:48 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:09.439 15:29:48 -- app/cmdline.sh@18 -- # waitforlisten 2009732 00:06:09.439 15:29:48 -- common/autotest_common.sh@819 -- # '[' -z 2009732 ']' 00:06:09.439 15:29:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.439 15:29:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.439 15:29:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.439 15:29:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.439 15:29:48 -- common/autotest_common.sh@10 -- # set +x 00:06:09.698 [2024-07-10 15:29:48.831841] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:09.698 [2024-07-10 15:29:48.831919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2009732 ] 00:06:09.698 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.698 [2024-07-10 15:29:48.887794] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.698 [2024-07-10 15:29:48.993914] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.698 [2024-07-10 15:29:48.994059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.632 15:29:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:10.632 15:29:49 -- common/autotest_common.sh@852 -- # return 0 00:06:10.632 15:29:49 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:10.890 { 00:06:10.890 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:06:10.890 "fields": { 00:06:10.890 "major": 24, 00:06:10.890 "minor": 1, 00:06:10.890 "patch": 1, 00:06:10.890 "suffix": "-pre", 00:06:10.890 "commit": "4b94202c6" 00:06:10.890 } 00:06:10.890 } 00:06:10.890 15:29:50 -- app/cmdline.sh@22 -- # expected_methods=() 00:06:10.890 15:29:50 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:10.890 15:29:50 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:10.890 15:29:50 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:10.890 15:29:50 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:10.890 15:29:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:10.890 15:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:10.890 15:29:50 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:10.890 15:29:50 -- app/cmdline.sh@26 -- # sort 00:06:10.890 15:29:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.890 15:29:50 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:10.890 15:29:50 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:10.890 15:29:50 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.890 15:29:50 -- common/autotest_common.sh@640 -- # local es=0 00:06:10.890 15:29:50 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:10.890 15:29:50 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.890 15:29:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:10.890 15:29:50 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.890 15:29:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:10.890 15:29:50 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.890 15:29:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:10.890 15:29:50 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.890 15:29:50 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:10.890 15:29:50 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:11.148 request: 00:06:11.148 { 00:06:11.148 "method": "env_dpdk_get_mem_stats", 00:06:11.148 "req_id": 1 00:06:11.149 } 00:06:11.149 Got JSON-RPC error response 00:06:11.149 response: 00:06:11.149 { 00:06:11.149 "code": -32601, 00:06:11.149 "message": "Method not found" 00:06:11.149 } 00:06:11.149 15:29:50 -- common/autotest_common.sh@643 -- # es=1 00:06:11.149 15:29:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:11.149 15:29:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:11.149 15:29:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:11.149 15:29:50 -- app/cmdline.sh@1 -- # killprocess 2009732 00:06:11.149 15:29:50 -- common/autotest_common.sh@926 -- # '[' -z 2009732 ']' 00:06:11.149 15:29:50 -- common/autotest_common.sh@930 -- # kill -0 2009732 00:06:11.149 15:29:50 -- common/autotest_common.sh@931 -- # uname 00:06:11.149 15:29:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:11.149 15:29:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2009732 00:06:11.149 15:29:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:11.149 15:29:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:11.149 15:29:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2009732' 00:06:11.149 killing process with pid 2009732 00:06:11.149 15:29:50 -- common/autotest_common.sh@945 -- # kill 2009732 00:06:11.149 15:29:50 -- common/autotest_common.sh@950 -- # wait 2009732 00:06:11.715 00:06:11.715 real 0m2.162s 00:06:11.715 user 0m2.773s 00:06:11.715 sys 0m0.488s 00:06:11.715 15:29:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.715 15:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:11.715 ************************************ 00:06:11.715 END TEST app_cmdline 00:06:11.715 ************************************ 00:06:11.715 15:29:50 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:11.715 15:29:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:11.715 15:29:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.715 15:29:50 -- common/autotest_common.sh@10 -- # set +x 00:06:11.715 ************************************ 00:06:11.715 START TEST version 00:06:11.715 ************************************ 00:06:11.715 15:29:50 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:11.715 * Looking for test storage... 
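The -32601 / "Method not found" exchange above is the expected result, not a failure: the cmdline test starts spdk_tgt with --rpcs-allowed spdk_get_version,rpc_get_methods, so env_dpdk_get_mem_stats is rejected by the allow-list. Condensed to its essentials (paths shortened, waitforlisten again replaced with a sleep), the check looks roughly like this:

    # Sketch: only the two allow-listed RPCs are reachable; everything else returns -32601.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    sleep 2                                              # stand-in for waitforlisten

    "$SPDK_DIR/scripts/rpc.py" spdk_get_version          # allowed: returns the version object
    "$SPDK_DIR/scripts/rpc.py" rpc_get_methods           # allowed: lists exactly these two methods
    "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats && echo "unexpected success" || echo "rejected, as the test expects"

    kill "$tgt_pid"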
00:06:11.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:11.715 15:29:50 -- app/version.sh@17 -- # get_header_version major 00:06:11.715 15:29:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.715 15:29:50 -- app/version.sh@14 -- # cut -f2 00:06:11.715 15:29:50 -- app/version.sh@14 -- # tr -d '"' 00:06:11.715 15:29:50 -- app/version.sh@17 -- # major=24 00:06:11.715 15:29:50 -- app/version.sh@18 -- # get_header_version minor 00:06:11.715 15:29:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.715 15:29:50 -- app/version.sh@14 -- # cut -f2 00:06:11.715 15:29:50 -- app/version.sh@14 -- # tr -d '"' 00:06:11.715 15:29:50 -- app/version.sh@18 -- # minor=1 00:06:11.715 15:29:50 -- app/version.sh@19 -- # get_header_version patch 00:06:11.715 15:29:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.715 15:29:50 -- app/version.sh@14 -- # cut -f2 00:06:11.715 15:29:50 -- app/version.sh@14 -- # tr -d '"' 00:06:11.715 15:29:50 -- app/version.sh@19 -- # patch=1 00:06:11.715 15:29:50 -- app/version.sh@20 -- # get_header_version suffix 00:06:11.715 15:29:50 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:11.715 15:29:50 -- app/version.sh@14 -- # cut -f2 00:06:11.715 15:29:50 -- app/version.sh@14 -- # tr -d '"' 00:06:11.715 15:29:50 -- app/version.sh@20 -- # suffix=-pre 00:06:11.715 15:29:50 -- app/version.sh@22 -- # version=24.1 00:06:11.715 15:29:50 -- app/version.sh@25 -- # (( patch != 0 )) 00:06:11.715 15:29:50 -- app/version.sh@25 -- # version=24.1.1 00:06:11.715 15:29:50 -- app/version.sh@28 -- # version=24.1.1rc0 00:06:11.715 15:29:50 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:11.715 15:29:50 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:11.715 15:29:51 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:06:11.715 15:29:51 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:06:11.715 00:06:11.715 real 0m0.106s 00:06:11.715 user 0m0.061s 00:06:11.715 sys 0m0.067s 00:06:11.715 15:29:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.715 15:29:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.715 ************************************ 00:06:11.715 END TEST version 00:06:11.715 ************************************ 00:06:11.715 15:29:51 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@204 -- # uname -s 00:06:11.715 15:29:51 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:06:11.715 15:29:51 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:11.715 15:29:51 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:06:11.715 15:29:51 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@268 -- # timing_exit lib 00:06:11.715 15:29:51 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:06:11.715 15:29:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.715 15:29:51 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:06:11.715 15:29:51 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:06:11.715 15:29:51 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.715 15:29:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:11.715 15:29:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.715 15:29:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.715 ************************************ 00:06:11.715 START TEST nvmf_tcp 00:06:11.715 ************************************ 00:06:11.715 15:29:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:11.973 * Looking for test storage... 00:06:11.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:11.973 15:29:51 -- nvmf/nvmf.sh@10 -- # uname -s 00:06:11.973 15:29:51 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:11.973 15:29:51 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.973 15:29:51 -- nvmf/common.sh@7 -- # uname -s 00:06:11.973 15:29:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.973 15:29:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.973 15:29:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.974 15:29:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.974 15:29:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.974 15:29:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.974 15:29:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.974 15:29:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.974 15:29:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.974 15:29:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.974 15:29:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.974 15:29:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.974 15:29:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.974 15:29:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.974 15:29:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.974 15:29:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.974 15:29:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.974 15:29:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.974 15:29:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.974 15:29:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- paths/export.sh@5 -- # export PATH 00:06:11.974 15:29:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- nvmf/common.sh@46 -- # : 0 00:06:11.974 15:29:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:11.974 15:29:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:11.974 15:29:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:11.974 15:29:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.974 15:29:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.974 15:29:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:11.974 15:29:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:11.974 15:29:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:11.974 15:29:51 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:11.974 15:29:51 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:06:11.974 15:29:51 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:06:11.974 15:29:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:11.974 15:29:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.974 15:29:51 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:06:11.974 15:29:51 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:11.974 15:29:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:11.974 15:29:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:11.974 15:29:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.974 ************************************ 00:06:11.974 START TEST nvmf_example 00:06:11.974 ************************************ 00:06:11.974 15:29:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:06:11.974 * Looking for test storage... 
00:06:11.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:11.974 15:29:51 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.974 15:29:51 -- nvmf/common.sh@7 -- # uname -s 00:06:11.974 15:29:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.974 15:29:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.974 15:29:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.974 15:29:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.974 15:29:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.974 15:29:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.974 15:29:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.974 15:29:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.974 15:29:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.974 15:29:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.974 15:29:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.974 15:29:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:11.974 15:29:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.974 15:29:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.974 15:29:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:11.974 15:29:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:11.974 15:29:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.974 15:29:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.974 15:29:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.974 15:29:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- paths/export.sh@5 -- # export PATH 00:06:11.974 15:29:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.974 15:29:51 -- nvmf/common.sh@46 -- # : 0 00:06:11.974 15:29:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:11.974 15:29:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:11.974 15:29:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:11.974 15:29:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.974 15:29:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.974 15:29:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:11.974 15:29:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:11.974 15:29:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:11.974 15:29:51 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:06:11.974 15:29:51 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:06:11.974 15:29:51 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:06:11.974 15:29:51 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:06:11.974 15:29:51 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:06:11.974 15:29:51 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:06:11.974 15:29:51 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:06:11.974 15:29:51 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:06:11.974 15:29:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:11.974 15:29:51 -- common/autotest_common.sh@10 -- # set +x 00:06:11.974 15:29:51 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:06:11.974 15:29:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:11.974 15:29:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:11.974 15:29:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:11.974 15:29:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:11.974 15:29:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:11.974 15:29:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:11.974 15:29:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:11.974 15:29:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:11.974 15:29:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:11.974 15:29:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:11.974 15:29:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:11.974 15:29:51 -- 
common/autotest_common.sh@10 -- # set +x 00:06:14.505 15:29:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:14.505 15:29:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:14.505 15:29:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:14.505 15:29:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:14.505 15:29:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:14.505 15:29:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:14.505 15:29:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:14.505 15:29:53 -- nvmf/common.sh@294 -- # net_devs=() 00:06:14.505 15:29:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:14.505 15:29:53 -- nvmf/common.sh@295 -- # e810=() 00:06:14.505 15:29:53 -- nvmf/common.sh@295 -- # local -ga e810 00:06:14.505 15:29:53 -- nvmf/common.sh@296 -- # x722=() 00:06:14.505 15:29:53 -- nvmf/common.sh@296 -- # local -ga x722 00:06:14.505 15:29:53 -- nvmf/common.sh@297 -- # mlx=() 00:06:14.505 15:29:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:14.505 15:29:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:14.505 15:29:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:14.505 15:29:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:14.505 15:29:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:14.505 15:29:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:14.505 15:29:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:14.505 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:14.505 15:29:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:14.505 15:29:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:14.505 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:14.505 15:29:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
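The "Found 0000:0a:00.0 (0x8086 - 0x159b)" lines above are nvmf/common.sh matching the host's NICs against the E810 PCI IDs it collected into its e810 array; the lines that follow resolve each matched PCI function to its kernel netdev (cvl_0_0 and cvl_0_1) through sysfs. That second step is essentially the loop below — the PCI addresses are the ones reported in this run and would differ on other hosts.

    # Sketch: map each matched E810 PCI function to the netdev(s) registered for it.
    for pci in 0000:0a:00.0 0000:0a:00.1; do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] || continue                    # skip if the glob did not match
            echo "Found net devices under $pci: $(basename "$dev")"
        done
    done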
00:06:14.505 15:29:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:14.505 15:29:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:14.505 15:29:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.505 15:29:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:14.505 15:29:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.505 15:29:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:14.505 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:14.505 15:29:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.505 15:29:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:14.505 15:29:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:14.505 15:29:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:14.505 15:29:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:14.505 15:29:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:14.505 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:14.505 15:29:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:14.505 15:29:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:14.505 15:29:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:14.505 15:29:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:14.505 15:29:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:14.505 15:29:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:14.505 15:29:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:14.505 15:29:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:14.505 15:29:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:14.505 15:29:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:14.505 15:29:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:14.505 15:29:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:14.505 15:29:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:14.505 15:29:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:14.505 15:29:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:14.505 15:29:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:14.505 15:29:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:14.505 15:29:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:14.505 15:29:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:14.505 15:29:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:14.505 15:29:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:14.505 15:29:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:14.505 15:29:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:14.505 15:29:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:14.505 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:14.505 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:06:14.505 00:06:14.505 --- 10.0.0.2 ping statistics --- 00:06:14.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.505 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:06:14.505 15:29:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:14.505 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:14.505 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:06:14.505 00:06:14.505 --- 10.0.0.1 ping statistics --- 00:06:14.505 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:14.505 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:06:14.505 15:29:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:14.505 15:29:53 -- nvmf/common.sh@410 -- # return 0 00:06:14.505 15:29:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:14.505 15:29:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:14.505 15:29:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:14.505 15:29:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:14.505 15:29:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:14.505 15:29:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:14.505 15:29:53 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:06:14.505 15:29:53 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:06:14.505 15:29:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:14.505 15:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:14.505 15:29:53 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:06:14.505 15:29:53 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:06:14.505 15:29:53 -- target/nvmf_example.sh@34 -- # nvmfpid=2011771 00:06:14.505 15:29:53 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:06:14.505 15:29:53 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:14.505 15:29:53 -- target/nvmf_example.sh@36 -- # waitforlisten 2011771 00:06:14.505 15:29:53 -- common/autotest_common.sh@819 -- # '[' -z 2011771 ']' 00:06:14.505 15:29:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.505 15:29:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.505 15:29:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
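For reference, the namespace plumbing that nvmf_tcp_init traced above condenses to the sequence below. This is a sketch rebuilt from this run's values (ports cvl_0_0/cvl_0_1, namespace cvl_0_0_ns_spdk, subnet 10.0.0.0/24, NVMe/TCP port 4420), not a verbatim excerpt of nvmf/common.sh.

    # One E810 port becomes the target side inside a private network
    # namespace; the other stays in the root namespace as the initiator.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Every target-side command from here on, including the example nvmf app launched next, runs through ip netns exec cvl_0_0_ns_spdk.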
00:06:14.505 15:29:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.505 15:29:53 -- common/autotest_common.sh@10 -- # set +x 00:06:14.505 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.069 15:29:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:15.069 15:29:54 -- common/autotest_common.sh@852 -- # return 0 00:06:15.069 15:29:54 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:06:15.069 15:29:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:15.069 15:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.327 15:29:54 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:15.327 15:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.327 15:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.327 15:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.327 15:29:54 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:06:15.327 15:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.327 15:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.327 15:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.327 15:29:54 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:06:15.327 15:29:54 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:15.327 15:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.327 15:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.327 15:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.327 15:29:54 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:06:15.327 15:29:54 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:06:15.327 15:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.327 15:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.327 15:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.327 15:29:54 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:15.327 15:29:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.327 15:29:54 -- common/autotest_common.sh@10 -- # set +x 00:06:15.327 15:29:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.327 15:29:54 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:06:15.327 15:29:54 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:06:15.327 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.348 Initializing NVMe Controllers 00:06:25.348 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:25.348 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:06:25.348 Initialization complete. Launching workers. 
00:06:25.348  ========================================================
00:06:25.348                                                                               Latency(us)
00:06:25.348  Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:06:25.348  TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   15796.43      61.70    4051.29     780.58   15235.15
00:06:25.348  ========================================================
00:06:25.348  Total                                                                   :   15796.43      61.70    4051.29     780.58   15235.15
00:06:25.348
00:06:25.348 15:30:04 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:06:25.348 15:30:04 -- target/nvmf_example.sh@66 -- # nvmftestfini
00:06:25.348 15:30:04 -- nvmf/common.sh@476 -- # nvmfcleanup
00:06:25.348 15:30:04 -- nvmf/common.sh@116 -- # sync
00:06:25.348 15:30:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:06:25.348 15:30:04 -- nvmf/common.sh@119 -- # set +e
00:06:25.348 15:30:04 -- nvmf/common.sh@120 -- # for i in {1..20}
00:06:25.348 15:30:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:06:25.348 rmmod nvme_tcp
00:06:25.348 rmmod nvme_fabrics
00:06:25.606 rmmod nvme_keyring
00:06:25.606 15:30:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:06:25.606 15:30:04 -- nvmf/common.sh@123 -- # set -e
00:06:25.606 15:30:04 -- nvmf/common.sh@124 -- # return 0
00:06:25.606 15:30:04 -- nvmf/common.sh@477 -- # '[' -n 2011771 ']'
00:06:25.606 15:30:04 -- nvmf/common.sh@478 -- # killprocess 2011771
00:06:25.606 15:30:04 -- common/autotest_common.sh@926 -- # '[' -z 2011771 ']'
00:06:25.606 15:30:04 -- common/autotest_common.sh@930 -- # kill -0 2011771
00:06:25.606 15:30:04 -- common/autotest_common.sh@931 -- # uname
00:06:25.606 15:30:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:25.606 15:30:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2011771
00:06:25.606 15:30:04 -- common/autotest_common.sh@932 -- # process_name=nvmf
00:06:25.606 15:30:04 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']'
00:06:25.606 15:30:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2011771'
00:06:25.606 killing process with pid 2011771
00:06:25.606 15:30:04 -- common/autotest_common.sh@945 -- # kill 2011771
00:06:25.606 15:30:04 -- common/autotest_common.sh@950 -- # wait 2011771
00:06:25.874 nvmf threads initialize successfully
00:06:25.874 bdev subsystem init successfully
00:06:25.874 created a nvmf target service
00:06:25.874 create targets's poll groups done
00:06:25.874 all subsystems of target started
00:06:25.874 nvmf target is running
00:06:25.874 all subsystems of target stopped
00:06:25.874 destroy targets's poll groups done
00:06:25.874 destroyed the nvmf target service
00:06:25.874 bdev subsystem finish successfully
00:06:25.874 nvmf threads destroy successfully
00:06:25.874 15:30:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:06:25.874 15:30:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:06:25.874 15:30:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:06:25.874 15:30:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:06:25.874 15:30:05 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:06:25.874 15:30:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:06:25.874 15:30:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:06:25.874 15:30:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:06:27.780 15:30:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:06:27.780 15:30:07 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test
00:06:27.780 15:30:07 --
common/autotest_common.sh@718 -- # xtrace_disable 00:06:27.780 15:30:07 -- common/autotest_common.sh@10 -- # set +x 00:06:27.780 00:06:27.780 real 0m15.926s 00:06:27.780 user 0m45.049s 00:06:27.780 sys 0m3.194s 00:06:27.780 15:30:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:27.780 15:30:07 -- common/autotest_common.sh@10 -- # set +x 00:06:27.780 ************************************ 00:06:27.780 END TEST nvmf_example 00:06:27.780 ************************************ 00:06:27.780 15:30:07 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:27.780 15:30:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:27.780 15:30:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:27.780 15:30:07 -- common/autotest_common.sh@10 -- # set +x 00:06:27.780 ************************************ 00:06:27.780 START TEST nvmf_filesystem 00:06:27.780 ************************************ 00:06:27.780 15:30:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:06:27.780 * Looking for test storage... 00:06:27.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:27.780 15:30:07 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:06:27.780 15:30:07 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:06:27.780 15:30:07 -- common/autotest_common.sh@34 -- # set -e 00:06:27.780 15:30:07 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:06:27.780 15:30:07 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:06:27.780 15:30:07 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:06:27.780 15:30:07 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:06:27.780 15:30:07 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:06:27.780 15:30:07 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:06:27.780 15:30:07 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:06:27.780 15:30:07 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:06:27.780 15:30:07 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:06:27.780 15:30:07 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:06:27.780 15:30:07 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:06:27.780 15:30:07 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:06:27.780 15:30:07 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:06:27.780 15:30:07 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:06:27.780 15:30:07 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:06:27.780 15:30:07 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:06:27.780 15:30:07 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:06:27.780 15:30:07 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:06:27.780 15:30:07 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:06:27.780 15:30:07 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:06:27.780 15:30:07 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:06:27.780 15:30:07 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:06:27.780 15:30:07 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:27.780 15:30:07 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:06:27.780 15:30:07 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:06:27.780 15:30:07 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:06:27.780 15:30:07 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:06:27.780 15:30:07 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:06:27.780 15:30:07 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:06:27.780 15:30:07 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:06:27.780 15:30:07 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:06:27.780 15:30:07 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:06:27.780 15:30:07 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:06:27.780 15:30:07 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:06:27.780 15:30:07 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:06:27.780 15:30:07 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:06:27.780 15:30:07 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:06:27.780 15:30:07 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:06:27.780 15:30:07 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:06:27.780 15:30:07 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:27.780 15:30:07 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:06:27.780 15:30:07 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:06:27.780 15:30:07 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:06:27.780 15:30:07 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:06:27.780 15:30:07 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:06:27.780 15:30:07 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:06:27.780 15:30:07 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:06:27.780 15:30:07 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:06:27.780 15:30:07 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:06:27.780 15:30:07 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:06:27.780 15:30:07 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:06:27.780 15:30:07 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:06:27.780 15:30:07 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:06:27.780 15:30:07 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:06:27.780 15:30:07 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:06:27.780 15:30:07 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:06:27.780 15:30:07 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:06:27.780 15:30:07 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:06:27.780 15:30:07 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:06:27.780 15:30:07 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:06:27.780 15:30:07 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:06:27.780 15:30:07 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:06:27.780 15:30:07 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:06:27.780 15:30:07 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:06:27.780 15:30:07 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:06:27.780 15:30:07 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:06:27.780 15:30:07 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:06:27.780 15:30:07 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:06:27.780 15:30:07 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:06:27.780 15:30:07 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 
00:06:27.780 15:30:07 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:06:27.780 15:30:07 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:06:27.780 15:30:07 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:06:27.780 15:30:07 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:06:27.780 15:30:07 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:06:27.780 15:30:07 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:06:27.780 15:30:07 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:06:27.780 15:30:07 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:06:27.780 15:30:07 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:06:27.781 15:30:07 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:06:27.781 15:30:07 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:06:28.040 15:30:07 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:06:28.040 15:30:07 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:06:28.040 15:30:07 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:28.040 15:30:07 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:06:28.040 15:30:07 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:28.040 15:30:07 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:06:28.040 15:30:07 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:28.040 15:30:07 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:28.040 15:30:07 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:28.040 15:30:07 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:28.040 15:30:07 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:06:28.040 15:30:07 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:06:28.040 15:30:07 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:06:28.040 15:30:07 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:06:28.040 15:30:07 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:06:28.040 15:30:07 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:06:28.040 15:30:07 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:06:28.040 15:30:07 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:06:28.040 #define SPDK_CONFIG_H 00:06:28.040 #define SPDK_CONFIG_APPS 1 00:06:28.040 #define SPDK_CONFIG_ARCH native 00:06:28.040 #undef SPDK_CONFIG_ASAN 00:06:28.040 #undef SPDK_CONFIG_AVAHI 00:06:28.040 #undef SPDK_CONFIG_CET 00:06:28.040 #define SPDK_CONFIG_COVERAGE 1 00:06:28.040 #define SPDK_CONFIG_CROSS_PREFIX 00:06:28.040 #undef SPDK_CONFIG_CRYPTO 00:06:28.040 #undef SPDK_CONFIG_CRYPTO_MLX5 00:06:28.040 #undef SPDK_CONFIG_CUSTOMOCF 00:06:28.040 #undef SPDK_CONFIG_DAOS 00:06:28.040 #define SPDK_CONFIG_DAOS_DIR 00:06:28.040 #define SPDK_CONFIG_DEBUG 1 00:06:28.040 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:06:28.040 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:06:28.041 #define SPDK_CONFIG_DPDK_INC_DIR 00:06:28.041 #define 
SPDK_CONFIG_DPDK_LIB_DIR 00:06:28.041 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:06:28.041 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:06:28.041 #define SPDK_CONFIG_EXAMPLES 1 00:06:28.041 #undef SPDK_CONFIG_FC 00:06:28.041 #define SPDK_CONFIG_FC_PATH 00:06:28.041 #define SPDK_CONFIG_FIO_PLUGIN 1 00:06:28.041 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:06:28.041 #undef SPDK_CONFIG_FUSE 00:06:28.041 #undef SPDK_CONFIG_FUZZER 00:06:28.041 #define SPDK_CONFIG_FUZZER_LIB 00:06:28.041 #undef SPDK_CONFIG_GOLANG 00:06:28.041 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:06:28.041 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:06:28.041 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:06:28.041 #undef SPDK_CONFIG_HAVE_LIBBSD 00:06:28.041 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:06:28.041 #define SPDK_CONFIG_IDXD 1 00:06:28.041 #define SPDK_CONFIG_IDXD_KERNEL 1 00:06:28.041 #undef SPDK_CONFIG_IPSEC_MB 00:06:28.041 #define SPDK_CONFIG_IPSEC_MB_DIR 00:06:28.041 #define SPDK_CONFIG_ISAL 1 00:06:28.041 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:06:28.041 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:06:28.041 #define SPDK_CONFIG_LIBDIR 00:06:28.041 #undef SPDK_CONFIG_LTO 00:06:28.041 #define SPDK_CONFIG_MAX_LCORES 00:06:28.041 #define SPDK_CONFIG_NVME_CUSE 1 00:06:28.041 #undef SPDK_CONFIG_OCF 00:06:28.041 #define SPDK_CONFIG_OCF_PATH 00:06:28.041 #define SPDK_CONFIG_OPENSSL_PATH 00:06:28.041 #undef SPDK_CONFIG_PGO_CAPTURE 00:06:28.041 #undef SPDK_CONFIG_PGO_USE 00:06:28.041 #define SPDK_CONFIG_PREFIX /usr/local 00:06:28.041 #undef SPDK_CONFIG_RAID5F 00:06:28.041 #undef SPDK_CONFIG_RBD 00:06:28.041 #define SPDK_CONFIG_RDMA 1 00:06:28.041 #define SPDK_CONFIG_RDMA_PROV verbs 00:06:28.041 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:06:28.041 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:06:28.041 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:06:28.041 #define SPDK_CONFIG_SHARED 1 00:06:28.041 #undef SPDK_CONFIG_SMA 00:06:28.041 #define SPDK_CONFIG_TESTS 1 00:06:28.041 #undef SPDK_CONFIG_TSAN 00:06:28.041 #define SPDK_CONFIG_UBLK 1 00:06:28.041 #define SPDK_CONFIG_UBSAN 1 00:06:28.041 #undef SPDK_CONFIG_UNIT_TESTS 00:06:28.041 #undef SPDK_CONFIG_URING 00:06:28.041 #define SPDK_CONFIG_URING_PATH 00:06:28.041 #undef SPDK_CONFIG_URING_ZNS 00:06:28.041 #undef SPDK_CONFIG_USDT 00:06:28.041 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:06:28.041 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:06:28.041 #undef SPDK_CONFIG_VFIO_USER 00:06:28.041 #define SPDK_CONFIG_VFIO_USER_DIR 00:06:28.041 #define SPDK_CONFIG_VHOST 1 00:06:28.041 #define SPDK_CONFIG_VIRTIO 1 00:06:28.041 #undef SPDK_CONFIG_VTUNE 00:06:28.041 #define SPDK_CONFIG_VTUNE_DIR 00:06:28.041 #define SPDK_CONFIG_WERROR 1 00:06:28.041 #define SPDK_CONFIG_WPDK_DIR 00:06:28.041 #undef SPDK_CONFIG_XNVME 00:06:28.041 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:06:28.041 15:30:07 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:06:28.041 15:30:07 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.041 15:30:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.041 15:30:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.041 15:30:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.041 15:30:07 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.041 15:30:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.041 15:30:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.041 15:30:07 -- paths/export.sh@5 -- # export PATH 00:06:28.041 15:30:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.041 15:30:07 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:28.041 15:30:07 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:06:28.041 15:30:07 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:28.041 15:30:07 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:06:28.041 15:30:07 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:06:28.041 15:30:07 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:28.041 15:30:07 -- pm/common@16 -- # TEST_TAG=N/A 00:06:28.041 15:30:07 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:06:28.041 15:30:07 -- common/autotest_common.sh@52 -- # : 1 00:06:28.041 15:30:07 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:06:28.041 15:30:07 -- common/autotest_common.sh@56 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:06:28.041 15:30:07 -- 
common/autotest_common.sh@58 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:06:28.041 15:30:07 -- common/autotest_common.sh@60 -- # : 1 00:06:28.041 15:30:07 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:06:28.041 15:30:07 -- common/autotest_common.sh@62 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:06:28.041 15:30:07 -- common/autotest_common.sh@64 -- # : 00:06:28.041 15:30:07 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:06:28.041 15:30:07 -- common/autotest_common.sh@66 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:06:28.041 15:30:07 -- common/autotest_common.sh@68 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:06:28.041 15:30:07 -- common/autotest_common.sh@70 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:06:28.041 15:30:07 -- common/autotest_common.sh@72 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:06:28.041 15:30:07 -- common/autotest_common.sh@74 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:06:28.041 15:30:07 -- common/autotest_common.sh@76 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:06:28.041 15:30:07 -- common/autotest_common.sh@78 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:06:28.041 15:30:07 -- common/autotest_common.sh@80 -- # : 1 00:06:28.041 15:30:07 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:06:28.041 15:30:07 -- common/autotest_common.sh@82 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:06:28.041 15:30:07 -- common/autotest_common.sh@84 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:06:28.041 15:30:07 -- common/autotest_common.sh@86 -- # : 1 00:06:28.041 15:30:07 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:06:28.041 15:30:07 -- common/autotest_common.sh@88 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:06:28.041 15:30:07 -- common/autotest_common.sh@90 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:06:28.041 15:30:07 -- common/autotest_common.sh@92 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:06:28.041 15:30:07 -- common/autotest_common.sh@94 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:06:28.041 15:30:07 -- common/autotest_common.sh@96 -- # : tcp 00:06:28.041 15:30:07 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:06:28.041 15:30:07 -- common/autotest_common.sh@98 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:06:28.041 15:30:07 -- common/autotest_common.sh@100 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:06:28.041 15:30:07 -- common/autotest_common.sh@102 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:06:28.041 15:30:07 -- common/autotest_common.sh@104 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:06:28.041 
15:30:07 -- common/autotest_common.sh@106 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:06:28.041 15:30:07 -- common/autotest_common.sh@108 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:06:28.041 15:30:07 -- common/autotest_common.sh@110 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:06:28.041 15:30:07 -- common/autotest_common.sh@112 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:06:28.041 15:30:07 -- common/autotest_common.sh@114 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:06:28.041 15:30:07 -- common/autotest_common.sh@116 -- # : 1 00:06:28.041 15:30:07 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:06:28.041 15:30:07 -- common/autotest_common.sh@118 -- # : 00:06:28.041 15:30:07 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:06:28.041 15:30:07 -- common/autotest_common.sh@120 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:06:28.041 15:30:07 -- common/autotest_common.sh@122 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:06:28.041 15:30:07 -- common/autotest_common.sh@124 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:06:28.041 15:30:07 -- common/autotest_common.sh@126 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:06:28.041 15:30:07 -- common/autotest_common.sh@128 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:06:28.041 15:30:07 -- common/autotest_common.sh@130 -- # : 0 00:06:28.041 15:30:07 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:06:28.042 15:30:07 -- common/autotest_common.sh@132 -- # : 00:06:28.042 15:30:07 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:06:28.042 15:30:07 -- common/autotest_common.sh@134 -- # : true 00:06:28.042 15:30:07 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:06:28.042 15:30:07 -- common/autotest_common.sh@136 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:06:28.042 15:30:07 -- common/autotest_common.sh@138 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:06:28.042 15:30:07 -- common/autotest_common.sh@140 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:06:28.042 15:30:07 -- common/autotest_common.sh@142 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:06:28.042 15:30:07 -- common/autotest_common.sh@144 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:06:28.042 15:30:07 -- common/autotest_common.sh@146 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:06:28.042 15:30:07 -- common/autotest_common.sh@148 -- # : e810 00:06:28.042 15:30:07 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:06:28.042 15:30:07 -- common/autotest_common.sh@150 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:06:28.042 15:30:07 -- common/autotest_common.sh@152 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 
00:06:28.042 15:30:07 -- common/autotest_common.sh@154 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:06:28.042 15:30:07 -- common/autotest_common.sh@156 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:06:28.042 15:30:07 -- common/autotest_common.sh@158 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:06:28.042 15:30:07 -- common/autotest_common.sh@160 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:06:28.042 15:30:07 -- common/autotest_common.sh@163 -- # : 00:06:28.042 15:30:07 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:06:28.042 15:30:07 -- common/autotest_common.sh@165 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:06:28.042 15:30:07 -- common/autotest_common.sh@167 -- # : 0 00:06:28.042 15:30:07 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:06:28.042 15:30:07 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:06:28.042 15:30:07 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:06:28.042 15:30:07 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:06:28.042 15:30:07 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:28.042 15:30:07 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:28.042 15:30:07 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:06:28.042 15:30:07 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:06:28.042 15:30:07 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:28.042 15:30:07 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:06:28.042 15:30:07 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:28.042 15:30:07 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:06:28.042 15:30:07 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:06:28.042 15:30:07 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:06:28.042 15:30:07 -- common/autotest_common.sh@196 -- # cat 00:06:28.042 15:30:07 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:06:28.042 15:30:07 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:28.042 15:30:07 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:06:28.042 15:30:07 -- common/autotest_common.sh@226 -- # export 
DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:28.042 15:30:07 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:06:28.042 15:30:07 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:06:28.042 15:30:07 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:06:28.042 15:30:07 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:28.042 15:30:07 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:06:28.042 15:30:07 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:28.042 15:30:07 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:06:28.042 15:30:07 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.042 15:30:07 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:06:28.042 15:30:07 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.042 15:30:07 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:06:28.042 15:30:07 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:28.042 15:30:07 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:06:28.042 15:30:07 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.042 15:30:07 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:06:28.042 15:30:07 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:06:28.042 15:30:07 -- common/autotest_common.sh@249 -- # export valgrind= 00:06:28.042 15:30:07 -- common/autotest_common.sh@249 -- # valgrind= 00:06:28.042 15:30:07 -- common/autotest_common.sh@255 -- # uname -s 00:06:28.042 15:30:07 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:06:28.042 15:30:07 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:06:28.042 15:30:07 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:06:28.042 15:30:07 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:06:28.042 15:30:07 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:28.042 15:30:07 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:06:28.042 15:30:07 -- common/autotest_common.sh@265 -- # MAKE=make 00:06:28.042 15:30:07 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:06:28.042 15:30:07 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:06:28.042 15:30:07 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:06:28.042 15:30:07 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:06:28.042 15:30:07 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:06:28.042 15:30:07 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:06:28.042 15:30:07 -- common/autotest_common.sh@291 -- # for i in "$@" 00:06:28.042 15:30:07 -- common/autotest_common.sh@292 -- # case "$i" in 00:06:28.042 15:30:07 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:06:28.042 15:30:07 -- common/autotest_common.sh@309 -- # [[ -z 2013728 ]] 00:06:28.042 15:30:07 -- common/autotest_common.sh@309 -- # 
kill -0 2013728 00:06:28.042 15:30:07 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:06:28.042 15:30:07 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:06:28.042 15:30:07 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:06:28.042 15:30:07 -- common/autotest_common.sh@322 -- # local mount target_dir 00:06:28.042 15:30:07 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:06:28.042 15:30:07 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:06:28.042 15:30:07 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:06:28.042 15:30:07 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:06:28.042 15:30:07 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.KFYweR 00:06:28.042 15:30:07 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:06:28.042 15:30:07 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:06:28.042 15:30:07 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:06:28.042 15:30:07 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.KFYweR/tests/target /tmp/spdk.KFYweR 00:06:28.042 15:30:07 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:06:28.042 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.042 15:30:07 -- common/autotest_common.sh@318 -- # df -T 00:06:28.042 15:30:07 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:06:28.042 15:30:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:06:28.042 15:30:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:06:28.042 15:30:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:06:28.043 15:30:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:06:28.043 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=953643008 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:06:28.043 15:30:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=4330786816 00:06:28.043 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=55625138176 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994737664 00:06:28.043 15:30:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=6369599488 00:06:28.043 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=30943850496 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # 
sizes["$mount"]=30997368832 00:06:28.043 15:30:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=53518336 00:06:28.043 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390187008 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398948352 00:06:28.043 15:30:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=8761344 00:06:28.043 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996344832 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997368832 00:06:28.043 15:30:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=1024000 00:06:28.043 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199468032 00:06:28.043 15:30:07 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199472128 00:06:28.043 15:30:07 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:06:28.043 15:30:07 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:06:28.043 15:30:07 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:06:28.043 * Looking for test storage... 
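The df -T bookkeeping above feeds set_test_storage, which effectively reduces to the selection loop below. The candidate directories and the ~2 GiB request are taken from this trace; the per-mount array handling is simplified, and df --output is used here only for brevity (the script parses df -T itself).

    # Pick the first candidate directory whose filesystem has enough free space.
    testdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target  # this run's test dir
    requested_size=2214592512                      # 2 GiB of test data plus slack
    storage_fallback=$(mktemp -udt spdk.XXXXXX)    # e.g. /tmp/spdk.KFYweR here
    candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    for target_dir in "${candidates[@]}"; do
        mkdir -p "$target_dir"
        avail=$(df -B1 --output=avail "$target_dir" | tail -1)   # free bytes on that mount
        if (( avail >= requested_size )); then
            export SPDK_TEST_STORAGE=$target_dir
            break
        fi
    done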
00:06:28.043 15:30:07 -- common/autotest_common.sh@359 -- # local target_space new_size 00:06:28.043 15:30:07 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:06:28.043 15:30:07 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.043 15:30:07 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:06:28.043 15:30:07 -- common/autotest_common.sh@363 -- # mount=/ 00:06:28.043 15:30:07 -- common/autotest_common.sh@365 -- # target_space=55625138176 00:06:28.043 15:30:07 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:06:28.043 15:30:07 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:06:28.043 15:30:07 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:06:28.043 15:30:07 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:06:28.043 15:30:07 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:06:28.043 15:30:07 -- common/autotest_common.sh@372 -- # new_size=8584192000 00:06:28.043 15:30:07 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:06:28.043 15:30:07 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.043 15:30:07 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.043 15:30:07 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.043 15:30:07 -- common/autotest_common.sh@380 -- # return 0 00:06:28.043 15:30:07 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:06:28.043 15:30:07 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:06:28.043 15:30:07 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:06:28.043 15:30:07 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:06:28.043 15:30:07 -- common/autotest_common.sh@1672 -- # true 00:06:28.043 15:30:07 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:06:28.043 15:30:07 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:06:28.043 15:30:07 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:06:28.043 15:30:07 -- common/autotest_common.sh@27 -- # exec 00:06:28.043 15:30:07 -- common/autotest_common.sh@29 -- # exec 00:06:28.043 15:30:07 -- common/autotest_common.sh@31 -- # xtrace_restore 00:06:28.043 15:30:07 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:06:28.043 15:30:07 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:06:28.043 15:30:07 -- common/autotest_common.sh@18 -- # set -x 00:06:28.043 15:30:07 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.043 15:30:07 -- nvmf/common.sh@7 -- # uname -s 00:06:28.043 15:30:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.043 15:30:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.043 15:30:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.043 15:30:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.043 15:30:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.043 15:30:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.043 15:30:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.043 15:30:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.043 15:30:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.043 15:30:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.043 15:30:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.043 15:30:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:28.043 15:30:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.043 15:30:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.043 15:30:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.043 15:30:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.043 15:30:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.043 15:30:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.043 15:30:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.043 15:30:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.043 15:30:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.043 15:30:07 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.043 15:30:07 -- paths/export.sh@5 -- # export PATH 00:06:28.043 15:30:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.043 15:30:07 -- nvmf/common.sh@46 -- # : 0 00:06:28.043 15:30:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:28.043 15:30:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:28.043 15:30:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:28.043 15:30:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.043 15:30:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.043 15:30:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:28.043 15:30:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:28.043 15:30:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:28.043 15:30:07 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:06:28.043 15:30:07 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:06:28.043 15:30:07 -- target/filesystem.sh@15 -- # nvmftestinit 00:06:28.043 15:30:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:28.043 15:30:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.043 15:30:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:28.043 15:30:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:28.043 15:30:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:28.043 15:30:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.043 15:30:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:28.043 15:30:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.043 15:30:07 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:28.043 15:30:07 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:28.043 15:30:07 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:28.043 15:30:07 -- common/autotest_common.sh@10 -- # set +x 00:06:29.943 15:30:09 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:29.943 15:30:09 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:29.943 15:30:09 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:29.943 15:30:09 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:29.943 15:30:09 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:29.943 15:30:09 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:29.943 15:30:09 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:29.943 15:30:09 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:29.943 15:30:09 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:29.943 15:30:09 -- nvmf/common.sh@295 -- # e810=() 00:06:29.943 15:30:09 -- nvmf/common.sh@295 -- # local -ga e810 00:06:29.943 15:30:09 -- nvmf/common.sh@296 -- # x722=() 00:06:29.943 15:30:09 -- nvmf/common.sh@296 -- # local -ga x722 00:06:29.943 15:30:09 -- nvmf/common.sh@297 -- # mlx=() 00:06:29.943 15:30:09 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:29.943 15:30:09 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:29.943 15:30:09 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:29.943 15:30:09 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:29.943 15:30:09 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:29.943 15:30:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:29.943 15:30:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:29.943 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:29.943 15:30:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:29.943 15:30:09 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:29.943 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:29.943 15:30:09 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:29.943 15:30:09 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:29.943 15:30:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.943 15:30:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:29.943 15:30:09 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.943 15:30:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:29.943 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:29.943 15:30:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.943 15:30:09 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:29.943 15:30:09 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:29.943 15:30:09 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:29.943 15:30:09 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:29.943 15:30:09 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:29.943 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:29.943 15:30:09 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:29.943 15:30:09 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:29.943 15:30:09 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:29.943 15:30:09 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:29.943 15:30:09 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:29.943 15:30:09 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:29.943 15:30:09 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:29.943 15:30:09 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:29.943 15:30:09 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:29.943 15:30:09 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:29.943 15:30:09 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:29.943 15:30:09 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:29.943 15:30:09 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:29.943 15:30:09 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:29.943 15:30:09 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:29.943 15:30:09 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:29.943 15:30:09 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:29.944 15:30:09 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:29.944 15:30:09 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:29.944 15:30:09 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:29.944 15:30:09 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:29.944 15:30:09 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:29.944 15:30:09 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:29.944 15:30:09 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:29.944 15:30:09 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:29.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:29.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:06:29.944 00:06:29.944 --- 10.0.0.2 ping statistics --- 00:06:29.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.944 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:06:29.944 15:30:09 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:29.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:29.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:06:29.944 00:06:29.944 --- 10.0.0.1 ping statistics --- 00:06:29.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:29.944 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:06:29.944 15:30:09 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:29.944 15:30:09 -- nvmf/common.sh@410 -- # return 0 00:06:29.944 15:30:09 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:29.944 15:30:09 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:29.944 15:30:09 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:29.944 15:30:09 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:29.944 15:30:09 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:29.944 15:30:09 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:29.944 15:30:09 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:30.202 15:30:09 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:06:30.202 15:30:09 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:30.202 15:30:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.202 15:30:09 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 ************************************ 00:06:30.202 START TEST nvmf_filesystem_no_in_capsule 00:06:30.202 ************************************ 00:06:30.202 15:30:09 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:06:30.202 15:30:09 -- target/filesystem.sh@47 -- # in_capsule=0 00:06:30.202 15:30:09 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:30.202 15:30:09 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:30.202 15:30:09 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:30.202 15:30:09 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 15:30:09 -- nvmf/common.sh@469 -- # nvmfpid=2015793 00:06:30.202 15:30:09 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:30.202 15:30:09 -- nvmf/common.sh@470 -- # waitforlisten 2015793 00:06:30.202 15:30:09 -- common/autotest_common.sh@819 -- # '[' -z 2015793 ']' 00:06:30.202 15:30:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.202 15:30:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.202 15:30:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.202 15:30:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.202 15:30:09 -- common/autotest_common.sh@10 -- # set +x 00:06:30.202 [2024-07-10 15:30:09.383761] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
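Stepping back, the nvmf_tcp_init trace above amounts to a small, reproducible namespace setup: one E810 port (cvl_0_0) is moved into a private namespace for the target, its peer (cvl_0_1) stays in the root namespace for the initiator, and TCP port 4420 is opened before nvme-tcp is loaded. Condensed from the commands shown in this log (every target-side command later in the run is therefore wrapped in ip netns exec cvl_0_0_ns_spdk):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    modprobe nvme-tcp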
00:06:30.202 [2024-07-10 15:30:09.383845] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:30.202 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.202 [2024-07-10 15:30:09.448980] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:30.202 [2024-07-10 15:30:09.564024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.202 [2024-07-10 15:30:09.564175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:30.202 [2024-07-10 15:30:09.564193] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:30.202 [2024-07-10 15:30:09.564206] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:30.202 [2024-07-10 15:30:09.564256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:30.202 [2024-07-10 15:30:09.564316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:30.202 [2024-07-10 15:30:09.564374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:30.202 [2024-07-10 15:30:09.564376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.134 15:30:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.134 15:30:10 -- common/autotest_common.sh@852 -- # return 0 00:06:31.134 15:30:10 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:31.134 15:30:10 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:31.134 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.134 15:30:10 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:31.134 15:30:10 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:31.134 15:30:10 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:06:31.134 15:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.134 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.134 [2024-07-10 15:30:10.369999] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:31.134 15:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.134 15:30:10 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:31.134 15:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.134 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.393 Malloc1 00:06:31.393 15:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.393 15:30:10 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:31.393 15:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.393 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.393 15:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.393 15:30:10 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:31.393 15:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.393 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.393 15:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.393 15:30:10 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
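The rpc_cmd calls traced above are the whole target-side bring-up for this test: create the TCP transport, back it with a malloc bdev, and expose that bdev as a namespace behind a listener. Written out as plain scripts/rpc.py invocations (arguments copied from the trace; rpc_cmd's socket handling is omitted, so the rpc.py form is an assumed equivalent, not the literal harness code):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1                 # 512 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420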
00:06:31.393 15:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.393 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.393 [2024-07-10 15:30:10.551150] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:31.393 15:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.393 15:30:10 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:31.393 15:30:10 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:31.393 15:30:10 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:31.393 15:30:10 -- common/autotest_common.sh@1359 -- # local bs 00:06:31.393 15:30:10 -- common/autotest_common.sh@1360 -- # local nb 00:06:31.393 15:30:10 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:31.393 15:30:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:31.393 15:30:10 -- common/autotest_common.sh@10 -- # set +x 00:06:31.393 15:30:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:31.393 15:30:10 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:31.393 { 00:06:31.393 "name": "Malloc1", 00:06:31.393 "aliases": [ 00:06:31.393 "b9ddcbcb-07df-4614-9614-232966f020aa" 00:06:31.393 ], 00:06:31.393 "product_name": "Malloc disk", 00:06:31.393 "block_size": 512, 00:06:31.393 "num_blocks": 1048576, 00:06:31.393 "uuid": "b9ddcbcb-07df-4614-9614-232966f020aa", 00:06:31.393 "assigned_rate_limits": { 00:06:31.393 "rw_ios_per_sec": 0, 00:06:31.393 "rw_mbytes_per_sec": 0, 00:06:31.393 "r_mbytes_per_sec": 0, 00:06:31.393 "w_mbytes_per_sec": 0 00:06:31.393 }, 00:06:31.393 "claimed": true, 00:06:31.393 "claim_type": "exclusive_write", 00:06:31.393 "zoned": false, 00:06:31.393 "supported_io_types": { 00:06:31.393 "read": true, 00:06:31.393 "write": true, 00:06:31.393 "unmap": true, 00:06:31.393 "write_zeroes": true, 00:06:31.393 "flush": true, 00:06:31.393 "reset": true, 00:06:31.393 "compare": false, 00:06:31.393 "compare_and_write": false, 00:06:31.393 "abort": true, 00:06:31.393 "nvme_admin": false, 00:06:31.393 "nvme_io": false 00:06:31.393 }, 00:06:31.393 "memory_domains": [ 00:06:31.393 { 00:06:31.393 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:31.393 "dma_device_type": 2 00:06:31.393 } 00:06:31.393 ], 00:06:31.393 "driver_specific": {} 00:06:31.393 } 00:06:31.393 ]' 00:06:31.393 15:30:10 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:31.393 15:30:10 -- common/autotest_common.sh@1362 -- # bs=512 00:06:31.393 15:30:10 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:31.393 15:30:10 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:31.393 15:30:10 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:31.393 15:30:10 -- common/autotest_common.sh@1367 -- # echo 512 00:06:31.393 15:30:10 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:31.393 15:30:10 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:32.325 15:30:11 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:32.325 15:30:11 -- common/autotest_common.sh@1177 -- # local i=0 00:06:32.325 15:30:11 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:32.325 15:30:11 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:32.325 15:30:11 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:34.216 15:30:13 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:34.216 15:30:13 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:34.216 15:30:13 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:34.216 15:30:13 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:34.216 15:30:13 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:34.216 15:30:13 -- common/autotest_common.sh@1187 -- # return 0 00:06:34.216 15:30:13 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:34.216 15:30:13 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:34.216 15:30:13 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:34.216 15:30:13 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:34.216 15:30:13 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:34.216 15:30:13 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:34.216 15:30:13 -- setup/common.sh@80 -- # echo 536870912 00:06:34.216 15:30:13 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:34.216 15:30:13 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:34.216 15:30:13 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:34.216 15:30:13 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:34.472 15:30:13 -- target/filesystem.sh@69 -- # partprobe 00:06:35.403 15:30:14 -- target/filesystem.sh@70 -- # sleep 1 00:06:36.335 15:30:15 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:06:36.335 15:30:15 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:36.335 15:30:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:36.335 15:30:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.335 15:30:15 -- common/autotest_common.sh@10 -- # set +x 00:06:36.335 ************************************ 00:06:36.335 START TEST filesystem_ext4 00:06:36.335 ************************************ 00:06:36.335 15:30:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:36.335 15:30:15 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:36.335 15:30:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:36.335 15:30:15 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:36.335 15:30:15 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:36.335 15:30:15 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:36.335 15:30:15 -- common/autotest_common.sh@904 -- # local i=0 00:06:36.335 15:30:15 -- common/autotest_common.sh@905 -- # local force 00:06:36.335 15:30:15 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:36.335 15:30:15 -- common/autotest_common.sh@908 -- # force=-F 00:06:36.335 15:30:15 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:36.335 mke2fs 1.46.5 (30-Dec-2021) 00:06:36.592 Discarding device blocks: 0/522240 done 00:06:36.592 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:36.592 Filesystem UUID: d80b4dc7-f511-4fb5-bbe5-edf668e66d58 00:06:36.592 Superblock backups stored on blocks: 00:06:36.592 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:36.592 00:06:36.592 Allocating group tables: 0/64 done 00:06:36.592 Writing inode tables: 0/64 done 00:06:36.849 Creating journal (8192 blocks): done 00:06:37.106 Writing superblocks and filesystem accounting information: 0/64 done 00:06:37.106 00:06:37.106 15:30:16 -- 
common/autotest_common.sh@921 -- # return 0 00:06:37.106 15:30:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:37.106 15:30:16 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:37.363 15:30:16 -- target/filesystem.sh@25 -- # sync 00:06:37.363 15:30:16 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:37.363 15:30:16 -- target/filesystem.sh@27 -- # sync 00:06:37.363 15:30:16 -- target/filesystem.sh@29 -- # i=0 00:06:37.363 15:30:16 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:37.363 15:30:16 -- target/filesystem.sh@37 -- # kill -0 2015793 00:06:37.363 15:30:16 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:37.363 15:30:16 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:37.363 15:30:16 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:37.363 15:30:16 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:37.364 00:06:37.364 real 0m0.862s 00:06:37.364 user 0m0.012s 00:06:37.364 sys 0m0.060s 00:06:37.364 15:30:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.364 15:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:37.364 ************************************ 00:06:37.364 END TEST filesystem_ext4 00:06:37.364 ************************************ 00:06:37.364 15:30:16 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:37.364 15:30:16 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:37.364 15:30:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.364 15:30:16 -- common/autotest_common.sh@10 -- # set +x 00:06:37.364 ************************************ 00:06:37.364 START TEST filesystem_btrfs 00:06:37.364 ************************************ 00:06:37.364 15:30:16 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:37.364 15:30:16 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:37.364 15:30:16 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:37.364 15:30:16 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:37.364 15:30:16 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:37.364 15:30:16 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:37.364 15:30:16 -- common/autotest_common.sh@904 -- # local i=0 00:06:37.364 15:30:16 -- common/autotest_common.sh@905 -- # local force 00:06:37.364 15:30:16 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:37.364 15:30:16 -- common/autotest_common.sh@910 -- # force=-f 00:06:37.364 15:30:16 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:37.621 btrfs-progs v6.6.2 00:06:37.621 See https://btrfs.readthedocs.io for more information. 00:06:37.621 00:06:37.621 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:37.621 NOTE: several default settings have changed in version 5.15, please make sure 00:06:37.621 this does not affect your deployments: 00:06:37.621 - DUP for metadata (-m dup) 00:06:37.621 - enabled no-holes (-O no-holes) 00:06:37.621 - enabled free-space-tree (-R free-space-tree) 00:06:37.621 00:06:37.621 Label: (null) 00:06:37.621 UUID: 0ef4f09d-5eb1-44b6-9c27-77120abce399 00:06:37.621 Node size: 16384 00:06:37.621 Sector size: 4096 00:06:37.621 Filesystem size: 510.00MiB 00:06:37.621 Block group profiles: 00:06:37.621 Data: single 8.00MiB 00:06:37.621 Metadata: DUP 32.00MiB 00:06:37.621 System: DUP 8.00MiB 00:06:37.621 SSD detected: yes 00:06:37.621 Zoned device: no 00:06:37.621 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:37.621 Runtime features: free-space-tree 00:06:37.621 Checksum: crc32c 00:06:37.621 Number of devices: 1 00:06:37.621 Devices: 00:06:37.621 ID SIZE PATH 00:06:37.621 1 510.00MiB /dev/nvme0n1p1 00:06:37.621 00:06:37.621 15:30:16 -- common/autotest_common.sh@921 -- # return 0 00:06:37.621 15:30:16 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:38.553 15:30:17 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:38.553 15:30:17 -- target/filesystem.sh@25 -- # sync 00:06:38.553 15:30:17 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:38.553 15:30:17 -- target/filesystem.sh@27 -- # sync 00:06:38.553 15:30:17 -- target/filesystem.sh@29 -- # i=0 00:06:38.553 15:30:17 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:38.553 15:30:17 -- target/filesystem.sh@37 -- # kill -0 2015793 00:06:38.553 15:30:17 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:38.553 15:30:17 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:38.553 15:30:17 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:38.553 15:30:17 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:38.553 00:06:38.553 real 0m1.321s 00:06:38.553 user 0m0.011s 00:06:38.553 sys 0m0.123s 00:06:38.553 15:30:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.553 15:30:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.553 ************************************ 00:06:38.553 END TEST filesystem_btrfs 00:06:38.553 ************************************ 00:06:38.553 15:30:17 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:06:38.553 15:30:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:38.553 15:30:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.553 15:30:17 -- common/autotest_common.sh@10 -- # set +x 00:06:38.553 ************************************ 00:06:38.553 START TEST filesystem_xfs 00:06:38.553 ************************************ 00:06:38.553 15:30:17 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:38.553 15:30:17 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:38.553 15:30:17 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:38.553 15:30:17 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:38.553 15:30:17 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:38.553 15:30:17 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:38.553 15:30:17 -- common/autotest_common.sh@904 -- # local i=0 00:06:38.553 15:30:17 -- common/autotest_common.sh@905 -- # local force 00:06:38.811 15:30:17 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:38.811 15:30:17 -- common/autotest_common.sh@910 -- # force=-f 00:06:38.811 15:30:17 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:38.811 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:38.811 = sectsz=512 attr=2, projid32bit=1 00:06:38.811 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:38.811 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:38.811 data = bsize=4096 blocks=130560, imaxpct=25 00:06:38.811 = sunit=0 swidth=0 blks 00:06:38.811 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:38.811 log =internal log bsize=4096 blocks=16384, version=2 00:06:38.811 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:38.811 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:39.741 Discarding blocks...Done. 00:06:39.741 15:30:18 -- common/autotest_common.sh@921 -- # return 0 00:06:39.741 15:30:18 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:41.652 15:30:20 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:41.652 15:30:20 -- target/filesystem.sh@25 -- # sync 00:06:41.652 15:30:20 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:41.652 15:30:20 -- target/filesystem.sh@27 -- # sync 00:06:41.652 15:30:20 -- target/filesystem.sh@29 -- # i=0 00:06:41.652 15:30:20 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:41.652 15:30:20 -- target/filesystem.sh@37 -- # kill -0 2015793 00:06:41.652 15:30:20 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:41.652 15:30:20 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:41.652 15:30:20 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:41.652 15:30:20 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:41.652 00:06:41.652 real 0m2.771s 00:06:41.652 user 0m0.019s 00:06:41.652 sys 0m0.057s 00:06:41.652 15:30:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.652 15:30:20 -- common/autotest_common.sh@10 -- # set +x 00:06:41.652 ************************************ 00:06:41.652 END TEST filesystem_xfs 00:06:41.652 ************************************ 00:06:41.652 15:30:20 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:41.652 15:30:20 -- target/filesystem.sh@93 -- # sync 00:06:41.652 15:30:20 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:41.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:41.910 15:30:21 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:41.910 15:30:21 -- common/autotest_common.sh@1198 -- # local i=0 00:06:41.910 15:30:21 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:41.910 15:30:21 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:41.910 15:30:21 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:41.910 15:30:21 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:41.910 15:30:21 -- common/autotest_common.sh@1210 -- # return 0 00:06:41.910 15:30:21 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:41.910 15:30:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:41.910 15:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:41.910 15:30:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:41.910 15:30:21 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:41.910 15:30:21 -- target/filesystem.sh@101 -- # killprocess 2015793 00:06:41.910 15:30:21 -- common/autotest_common.sh@926 -- # '[' -z 2015793 ']' 00:06:41.910 15:30:21 -- common/autotest_common.sh@930 -- # kill -0 2015793 00:06:41.910 15:30:21 -- 
common/autotest_common.sh@931 -- # uname 00:06:41.910 15:30:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:41.910 15:30:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2015793 00:06:41.910 15:30:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:41.910 15:30:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:41.910 15:30:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2015793' 00:06:41.910 killing process with pid 2015793 00:06:41.910 15:30:21 -- common/autotest_common.sh@945 -- # kill 2015793 00:06:41.911 15:30:21 -- common/autotest_common.sh@950 -- # wait 2015793 00:06:42.476 15:30:21 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:42.476 00:06:42.476 real 0m12.264s 00:06:42.476 user 0m47.085s 00:06:42.476 sys 0m1.836s 00:06:42.476 15:30:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.476 15:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.476 ************************************ 00:06:42.476 END TEST nvmf_filesystem_no_in_capsule 00:06:42.476 ************************************ 00:06:42.476 15:30:21 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:06:42.476 15:30:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:42.476 15:30:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.476 15:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.476 ************************************ 00:06:42.476 START TEST nvmf_filesystem_in_capsule 00:06:42.476 ************************************ 00:06:42.476 15:30:21 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:06:42.476 15:30:21 -- target/filesystem.sh@47 -- # in_capsule=4096 00:06:42.476 15:30:21 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:06:42.476 15:30:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:42.476 15:30:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:42.476 15:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.476 15:30:21 -- nvmf/common.sh@469 -- # nvmfpid=2017492 00:06:42.476 15:30:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:42.476 15:30:21 -- nvmf/common.sh@470 -- # waitforlisten 2017492 00:06:42.476 15:30:21 -- common/autotest_common.sh@819 -- # '[' -z 2017492 ']' 00:06:42.476 15:30:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.476 15:30:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.476 15:30:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.476 15:30:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.476 15:30:21 -- common/autotest_common.sh@10 -- # set +x 00:06:42.476 [2024-07-10 15:30:21.682182] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
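On the initiator side, each of these passes boils down to connecting, waiting for the SPDKISFASTANDAWESOME serial to appear, and carving a single GPT partition; condensed from the first pass's trace above (the in-capsule pass below repeats it verbatim, and the pipe form here is shorthand for the separately xtrace'd steps):

    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
                 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 in this run
    parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe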
00:06:42.476 [2024-07-10 15:30:21.682262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.476 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.476 [2024-07-10 15:30:21.752835] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:42.733 [2024-07-10 15:30:21.874912] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.733 [2024-07-10 15:30:21.875067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.733 [2024-07-10 15:30:21.875087] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.733 [2024-07-10 15:30:21.875103] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.733 [2024-07-10 15:30:21.875161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.733 [2024-07-10 15:30:21.875216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.733 [2024-07-10 15:30:21.875336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.733 [2024-07-10 15:30:21.875338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.298 15:30:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.298 15:30:22 -- common/autotest_common.sh@852 -- # return 0 00:06:43.298 15:30:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:43.298 15:30:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:43.298 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.555 15:30:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:43.555 15:30:22 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:06:43.555 15:30:22 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:06:43.555 15:30:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.555 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.555 [2024-07-10 15:30:22.684996] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:43.555 15:30:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.555 15:30:22 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:06:43.555 15:30:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.555 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.555 Malloc1 00:06:43.555 15:30:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.555 15:30:22 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:06:43.555 15:30:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.555 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.555 15:30:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.555 15:30:22 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:06:43.555 15:30:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.555 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.555 15:30:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.555 15:30:22 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
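The only functional difference from the first pass is visible in the transport line above: nvmf_filesystem_part is now invoked with 4096 instead of 0, so the TCP transport is created with 4 KiB of in-capsule data. Side by side (again in the assumed rpc.py form of rpc_cmd):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first pass: no in-capsule data
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this pass: 4 KiB in-capsule data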
00:06:43.555 15:30:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.556 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.556 [2024-07-10 15:30:22.875072] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.556 15:30:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.556 15:30:22 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:06:43.556 15:30:22 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:06:43.556 15:30:22 -- common/autotest_common.sh@1358 -- # local bdev_info 00:06:43.556 15:30:22 -- common/autotest_common.sh@1359 -- # local bs 00:06:43.556 15:30:22 -- common/autotest_common.sh@1360 -- # local nb 00:06:43.556 15:30:22 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:06:43.556 15:30:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:43.556 15:30:22 -- common/autotest_common.sh@10 -- # set +x 00:06:43.556 15:30:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:43.556 15:30:22 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:06:43.556 { 00:06:43.556 "name": "Malloc1", 00:06:43.556 "aliases": [ 00:06:43.556 "8d4fab9b-e842-47b4-88ed-6c50a7620f3c" 00:06:43.556 ], 00:06:43.556 "product_name": "Malloc disk", 00:06:43.556 "block_size": 512, 00:06:43.556 "num_blocks": 1048576, 00:06:43.556 "uuid": "8d4fab9b-e842-47b4-88ed-6c50a7620f3c", 00:06:43.556 "assigned_rate_limits": { 00:06:43.556 "rw_ios_per_sec": 0, 00:06:43.556 "rw_mbytes_per_sec": 0, 00:06:43.556 "r_mbytes_per_sec": 0, 00:06:43.556 "w_mbytes_per_sec": 0 00:06:43.556 }, 00:06:43.556 "claimed": true, 00:06:43.556 "claim_type": "exclusive_write", 00:06:43.556 "zoned": false, 00:06:43.556 "supported_io_types": { 00:06:43.556 "read": true, 00:06:43.556 "write": true, 00:06:43.556 "unmap": true, 00:06:43.556 "write_zeroes": true, 00:06:43.556 "flush": true, 00:06:43.556 "reset": true, 00:06:43.556 "compare": false, 00:06:43.556 "compare_and_write": false, 00:06:43.556 "abort": true, 00:06:43.556 "nvme_admin": false, 00:06:43.556 "nvme_io": false 00:06:43.556 }, 00:06:43.556 "memory_domains": [ 00:06:43.556 { 00:06:43.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:43.556 "dma_device_type": 2 00:06:43.556 } 00:06:43.556 ], 00:06:43.556 "driver_specific": {} 00:06:43.556 } 00:06:43.556 ]' 00:06:43.556 15:30:22 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:06:43.556 15:30:22 -- common/autotest_common.sh@1362 -- # bs=512 00:06:43.556 15:30:22 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:06:43.812 15:30:22 -- common/autotest_common.sh@1363 -- # nb=1048576 00:06:43.812 15:30:22 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:06:43.812 15:30:22 -- common/autotest_common.sh@1367 -- # echo 512 00:06:43.812 15:30:22 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:06:43.812 15:30:22 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:06:44.375 15:30:23 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:06:44.375 15:30:23 -- common/autotest_common.sh@1177 -- # local i=0 00:06:44.375 15:30:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:06:44.375 15:30:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:06:44.375 15:30:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:06:46.899 15:30:25 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:06:46.899 15:30:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:06:46.899 15:30:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:06:46.899 15:30:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:06:46.899 15:30:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:06:46.899 15:30:25 -- common/autotest_common.sh@1187 -- # return 0 00:06:46.899 15:30:25 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:06:46.899 15:30:25 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:06:46.899 15:30:25 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:06:46.899 15:30:25 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:06:46.899 15:30:25 -- setup/common.sh@76 -- # local dev=nvme0n1 00:06:46.899 15:30:25 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:06:46.899 15:30:25 -- setup/common.sh@80 -- # echo 536870912 00:06:46.899 15:30:25 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:06:46.899 15:30:25 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:06:46.899 15:30:25 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:06:46.899 15:30:25 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:06:46.899 15:30:25 -- target/filesystem.sh@69 -- # partprobe 00:06:46.899 15:30:26 -- target/filesystem.sh@70 -- # sleep 1 00:06:48.273 15:30:27 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:06:48.273 15:30:27 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:06:48.273 15:30:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:48.273 15:30:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.273 15:30:27 -- common/autotest_common.sh@10 -- # set +x 00:06:48.273 ************************************ 00:06:48.273 START TEST filesystem_in_capsule_ext4 00:06:48.273 ************************************ 00:06:48.273 15:30:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:06:48.273 15:30:27 -- target/filesystem.sh@18 -- # fstype=ext4 00:06:48.273 15:30:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.273 15:30:27 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:06:48.273 15:30:27 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:06:48.273 15:30:27 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:48.273 15:30:27 -- common/autotest_common.sh@904 -- # local i=0 00:06:48.273 15:30:27 -- common/autotest_common.sh@905 -- # local force 00:06:48.273 15:30:27 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:06:48.273 15:30:27 -- common/autotest_common.sh@908 -- # force=-F 00:06:48.273 15:30:27 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:06:48.273 mke2fs 1.46.5 (30-Dec-2021) 00:06:48.273 Discarding device blocks: 0/522240 done 00:06:48.273 Creating filesystem with 522240 1k blocks and 130560 inodes 00:06:48.273 Filesystem UUID: 8f52f214-2f4b-4f20-a132-4197f6bde175 00:06:48.273 Superblock backups stored on blocks: 00:06:48.273 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:06:48.273 00:06:48.273 Allocating group tables: 0/64 done 00:06:48.273 Writing inode tables: 0/64 done 00:06:48.273 Creating journal (8192 blocks): done 00:06:48.273 Writing superblocks and filesystem accounting information: 0/64 done 00:06:48.273 00:06:48.273 
15:30:27 -- common/autotest_common.sh@921 -- # return 0 00:06:48.273 15:30:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:48.532 15:30:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:48.532 15:30:27 -- target/filesystem.sh@25 -- # sync 00:06:48.532 15:30:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:48.532 15:30:27 -- target/filesystem.sh@27 -- # sync 00:06:48.532 15:30:27 -- target/filesystem.sh@29 -- # i=0 00:06:48.532 15:30:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:48.532 15:30:27 -- target/filesystem.sh@37 -- # kill -0 2017492 00:06:48.532 15:30:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:48.532 15:30:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:48.532 15:30:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:48.532 15:30:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:48.532 00:06:48.532 real 0m0.496s 00:06:48.532 user 0m0.024s 00:06:48.532 sys 0m0.042s 00:06:48.532 15:30:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.532 15:30:27 -- common/autotest_common.sh@10 -- # set +x 00:06:48.532 ************************************ 00:06:48.532 END TEST filesystem_in_capsule_ext4 00:06:48.532 ************************************ 00:06:48.532 15:30:27 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:06:48.532 15:30:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:48.532 15:30:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.532 15:30:27 -- common/autotest_common.sh@10 -- # set +x 00:06:48.532 ************************************ 00:06:48.532 START TEST filesystem_in_capsule_btrfs 00:06:48.532 ************************************ 00:06:48.532 15:30:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:06:48.532 15:30:27 -- target/filesystem.sh@18 -- # fstype=btrfs 00:06:48.532 15:30:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:48.532 15:30:27 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:06:48.532 15:30:27 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:06:48.532 15:30:27 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:48.532 15:30:27 -- common/autotest_common.sh@904 -- # local i=0 00:06:48.532 15:30:27 -- common/autotest_common.sh@905 -- # local force 00:06:48.532 15:30:27 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:06:48.532 15:30:27 -- common/autotest_common.sh@910 -- # force=-f 00:06:48.532 15:30:27 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:06:48.790 btrfs-progs v6.6.2 00:06:48.790 See https://btrfs.readthedocs.io for more information. 00:06:48.790 00:06:48.790 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:06:48.790 NOTE: several default settings have changed in version 5.15, please make sure 00:06:48.790 this does not affect your deployments: 00:06:48.790 - DUP for metadata (-m dup) 00:06:48.790 - enabled no-holes (-O no-holes) 00:06:48.790 - enabled free-space-tree (-R free-space-tree) 00:06:48.790 00:06:48.790 Label: (null) 00:06:48.790 UUID: f5af42c7-584d-42c6-ac1b-10bddc9f523d 00:06:48.790 Node size: 16384 00:06:48.790 Sector size: 4096 00:06:48.790 Filesystem size: 510.00MiB 00:06:48.790 Block group profiles: 00:06:48.790 Data: single 8.00MiB 00:06:48.790 Metadata: DUP 32.00MiB 00:06:48.790 System: DUP 8.00MiB 00:06:48.790 SSD detected: yes 00:06:48.790 Zoned device: no 00:06:48.790 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:06:48.790 Runtime features: free-space-tree 00:06:48.790 Checksum: crc32c 00:06:48.790 Number of devices: 1 00:06:48.790 Devices: 00:06:48.790 ID SIZE PATH 00:06:48.790 1 510.00MiB /dev/nvme0n1p1 00:06:48.790 00:06:48.790 15:30:27 -- common/autotest_common.sh@921 -- # return 0 00:06:48.790 15:30:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:49.723 15:30:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:49.723 15:30:28 -- target/filesystem.sh@25 -- # sync 00:06:49.723 15:30:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:49.723 15:30:28 -- target/filesystem.sh@27 -- # sync 00:06:49.723 15:30:28 -- target/filesystem.sh@29 -- # i=0 00:06:49.723 15:30:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:49.723 15:30:29 -- target/filesystem.sh@37 -- # kill -0 2017492 00:06:49.723 15:30:29 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:49.723 15:30:29 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:49.723 15:30:29 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:49.723 15:30:29 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:49.723 00:06:49.723 real 0m1.227s 00:06:49.723 user 0m0.018s 00:06:49.723 sys 0m0.113s 00:06:49.723 15:30:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.723 15:30:29 -- common/autotest_common.sh@10 -- # set +x 00:06:49.723 ************************************ 00:06:49.723 END TEST filesystem_in_capsule_btrfs 00:06:49.723 ************************************ 00:06:49.723 15:30:29 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:06:49.723 15:30:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:49.723 15:30:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.723 15:30:29 -- common/autotest_common.sh@10 -- # set +x 00:06:49.723 ************************************ 00:06:49.723 START TEST filesystem_in_capsule_xfs 00:06:49.723 ************************************ 00:06:49.723 15:30:29 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:06:49.723 15:30:29 -- target/filesystem.sh@18 -- # fstype=xfs 00:06:49.723 15:30:29 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:06:49.723 15:30:29 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:06:49.723 15:30:29 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:06:49.723 15:30:29 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:06:49.723 15:30:29 -- common/autotest_common.sh@904 -- # local i=0 00:06:49.723 15:30:29 -- common/autotest_common.sh@905 -- # local force 00:06:49.723 15:30:29 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:06:49.723 15:30:29 -- common/autotest_common.sh@910 -- # force=-f 
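Each filesystem case, including the xfs one that continues below, goes through the same two steps: make_filesystem picks the force flag by fstype and runs the matching mkfs tool, then the test mounts the partition and does a tiny create/delete round-trip while checking the target is still alive. A condensed sketch reconstructed from the trace (the real helper's retry and error handling are left out):

    make_filesystem() {                   # sketch of the traced helper, not the verbatim source
        local fstype=$1 dev_name=$2 force=-f
        [ "$fstype" = ext4 ] && force=-F
        "mkfs.$fstype" $force "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                    # nvmf_tgt must still be running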
00:06:49.723 15:30:29 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:06:49.981 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:06:49.981 = sectsz=512 attr=2, projid32bit=1 00:06:49.981 = crc=1 finobt=1, sparse=1, rmapbt=0 00:06:49.981 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:06:49.981 data = bsize=4096 blocks=130560, imaxpct=25 00:06:49.981 = sunit=0 swidth=0 blks 00:06:49.981 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:06:49.981 log =internal log bsize=4096 blocks=16384, version=2 00:06:49.981 = sectsz=512 sunit=0 blks, lazy-count=1 00:06:49.981 realtime =none extsz=4096 blocks=0, rtextents=0 00:06:50.913 Discarding blocks...Done. 00:06:50.913 15:30:30 -- common/autotest_common.sh@921 -- # return 0 00:06:50.913 15:30:30 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:06:52.811 15:30:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:06:52.811 15:30:32 -- target/filesystem.sh@25 -- # sync 00:06:52.811 15:30:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:06:52.811 15:30:32 -- target/filesystem.sh@27 -- # sync 00:06:52.811 15:30:32 -- target/filesystem.sh@29 -- # i=0 00:06:52.811 15:30:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:06:52.811 15:30:32 -- target/filesystem.sh@37 -- # kill -0 2017492 00:06:52.811 15:30:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:06:52.811 15:30:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:06:52.811 15:30:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:06:52.811 15:30:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:06:52.811 00:06:52.811 real 0m3.106s 00:06:52.811 user 0m0.011s 00:06:52.811 sys 0m0.069s 00:06:52.811 15:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.811 15:30:32 -- common/autotest_common.sh@10 -- # set +x 00:06:52.811 ************************************ 00:06:52.811 END TEST filesystem_in_capsule_xfs 00:06:52.811 ************************************ 00:06:52.811 15:30:32 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:06:53.069 15:30:32 -- target/filesystem.sh@93 -- # sync 00:06:53.069 15:30:32 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:06:53.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:06:53.069 15:30:32 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:06:53.069 15:30:32 -- common/autotest_common.sh@1198 -- # local i=0 00:06:53.069 15:30:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:06:53.069 15:30:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:53.069 15:30:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:06:53.069 15:30:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:06:53.069 15:30:32 -- common/autotest_common.sh@1210 -- # return 0 00:06:53.069 15:30:32 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:53.069 15:30:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:53.069 15:30:32 -- common/autotest_common.sh@10 -- # set +x 00:06:53.069 15:30:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:53.069 15:30:32 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:06:53.069 15:30:32 -- target/filesystem.sh@101 -- # killprocess 2017492 00:06:53.069 15:30:32 -- common/autotest_common.sh@926 -- # '[' -z 2017492 ']' 00:06:53.069 15:30:32 -- common/autotest_common.sh@930 -- # kill -0 2017492 
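Both passes finish with the same unwind, traced above for pid 2015793 and here for pid 2017492: drop the test partition, disconnect the initiator, delete the subsystem over RPC, and stop the target. Condensed (the rpc.py line is the assumed equivalent of rpc_cmd, and the final kill/wait is what the harness's killprocess helper boils down to):

    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill "$nvmfpid" && wait "$nvmfpid"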
00:06:53.069 15:30:32 -- common/autotest_common.sh@931 -- # uname 00:06:53.069 15:30:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:53.069 15:30:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2017492 00:06:53.069 15:30:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:53.069 15:30:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:53.069 15:30:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2017492' 00:06:53.069 killing process with pid 2017492 00:06:53.070 15:30:32 -- common/autotest_common.sh@945 -- # kill 2017492 00:06:53.070 15:30:32 -- common/autotest_common.sh@950 -- # wait 2017492 00:06:53.636 15:30:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:06:53.636 00:06:53.636 real 0m11.253s 00:06:53.636 user 0m43.072s 00:06:53.636 sys 0m1.710s 00:06:53.636 15:30:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.636 15:30:32 -- common/autotest_common.sh@10 -- # set +x 00:06:53.636 ************************************ 00:06:53.636 END TEST nvmf_filesystem_in_capsule 00:06:53.636 ************************************ 00:06:53.636 15:30:32 -- target/filesystem.sh@108 -- # nvmftestfini 00:06:53.636 15:30:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:53.636 15:30:32 -- nvmf/common.sh@116 -- # sync 00:06:53.636 15:30:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:53.636 15:30:32 -- nvmf/common.sh@119 -- # set +e 00:06:53.636 15:30:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:53.636 15:30:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:53.636 rmmod nvme_tcp 00:06:53.636 rmmod nvme_fabrics 00:06:53.636 rmmod nvme_keyring 00:06:53.636 15:30:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:53.636 15:30:32 -- nvmf/common.sh@123 -- # set -e 00:06:53.636 15:30:32 -- nvmf/common.sh@124 -- # return 0 00:06:53.636 15:30:32 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:53.636 15:30:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:53.636 15:30:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:53.636 15:30:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:53.636 15:30:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:53.636 15:30:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:53.636 15:30:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:53.636 15:30:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:53.636 15:30:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.169 15:30:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:06:56.169 00:06:56.169 real 0m27.909s 00:06:56.169 user 1m31.007s 00:06:56.169 sys 0m5.088s 00:06:56.169 15:30:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.169 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.169 ************************************ 00:06:56.169 END TEST nvmf_filesystem 00:06:56.169 ************************************ 00:06:56.169 15:30:35 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:56.169 15:30:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:56.169 15:30:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.169 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:06:56.170 ************************************ 00:06:56.170 START TEST nvmf_discovery 00:06:56.170 ************************************ 00:06:56.170 
15:30:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:06:56.170 * Looking for test storage... 00:06:56.170 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:56.170 15:30:35 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:56.170 15:30:35 -- nvmf/common.sh@7 -- # uname -s 00:06:56.170 15:30:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:56.170 15:30:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:56.170 15:30:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:56.170 15:30:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:56.170 15:30:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:56.170 15:30:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:56.170 15:30:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:56.170 15:30:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:56.170 15:30:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:56.170 15:30:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:56.170 15:30:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:56.170 15:30:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:56.170 15:30:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:56.170 15:30:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:56.170 15:30:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:56.170 15:30:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:56.170 15:30:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:56.170 15:30:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:56.170 15:30:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:56.170 15:30:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.170 15:30:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.170 15:30:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.170 15:30:35 -- paths/export.sh@5 -- # export PATH 00:06:56.170 15:30:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:56.170 15:30:35 -- nvmf/common.sh@46 -- # : 0 00:06:56.170 15:30:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:56.170 15:30:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:56.170 15:30:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:56.170 15:30:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:56.170 15:30:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:56.170 15:30:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:56.170 15:30:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:56.170 15:30:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:56.170 15:30:35 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:06:56.170 15:30:35 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:06:56.170 15:30:35 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:06:56.170 15:30:35 -- target/discovery.sh@15 -- # hash nvme 00:06:56.170 15:30:35 -- target/discovery.sh@20 -- # nvmftestinit 00:06:56.170 15:30:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:06:56.170 15:30:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:56.170 15:30:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:56.170 15:30:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:56.170 15:30:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:56.170 15:30:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:56.170 15:30:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:56.170 15:30:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:56.170 15:30:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:06:56.170 15:30:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:56.170 15:30:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:56.170 15:30:35 -- common/autotest_common.sh@10 -- # set +x 00:06:58.182 15:30:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:58.182 15:30:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:58.182 15:30:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:58.182 15:30:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:58.182 15:30:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:58.182 15:30:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:58.182 15:30:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:58.182 15:30:37 -- 
nvmf/common.sh@294 -- # net_devs=() 00:06:58.182 15:30:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:58.182 15:30:37 -- nvmf/common.sh@295 -- # e810=() 00:06:58.182 15:30:37 -- nvmf/common.sh@295 -- # local -ga e810 00:06:58.182 15:30:37 -- nvmf/common.sh@296 -- # x722=() 00:06:58.182 15:30:37 -- nvmf/common.sh@296 -- # local -ga x722 00:06:58.183 15:30:37 -- nvmf/common.sh@297 -- # mlx=() 00:06:58.183 15:30:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:58.183 15:30:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:58.183 15:30:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:58.183 15:30:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:06:58.183 15:30:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:58.183 15:30:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:58.183 15:30:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:58.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:58.183 15:30:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:58.183 15:30:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:58.183 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:58.183 15:30:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:58.183 15:30:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.183 15:30:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.183 15:30:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.183 15:30:37 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.183 15:30:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:58.183 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:58.183 15:30:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.183 15:30:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:58.183 15:30:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:58.183 15:30:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:58.183 15:30:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:58.183 15:30:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:58.183 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:58.183 15:30:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:58.183 15:30:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:58.183 15:30:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:58.183 15:30:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:06:58.183 15:30:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:58.183 15:30:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:58.183 15:30:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:58.183 15:30:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:06:58.183 15:30:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:58.183 15:30:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:58.183 15:30:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:06:58.183 15:30:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:58.183 15:30:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:58.183 15:30:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:06:58.183 15:30:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:06:58.183 15:30:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:06:58.183 15:30:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:58.183 15:30:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:58.183 15:30:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:58.183 15:30:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:06:58.183 15:30:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:58.183 15:30:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:58.183 15:30:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:58.183 15:30:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:06:58.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:58.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:06:58.183 00:06:58.183 --- 10.0.0.2 ping statistics --- 00:06:58.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.183 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:06:58.183 15:30:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:58.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:58.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:06:58.183 00:06:58.183 --- 10.0.0.1 ping statistics --- 00:06:58.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:58.183 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:06:58.183 15:30:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:58.183 15:30:37 -- nvmf/common.sh@410 -- # return 0 00:06:58.183 15:30:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:58.183 15:30:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:58.183 15:30:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:06:58.183 15:30:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:58.183 15:30:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:06:58.183 15:30:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:06:58.183 15:30:37 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:06:58.183 15:30:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:06:58.183 15:30:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:58.183 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.183 15:30:37 -- nvmf/common.sh@469 -- # nvmfpid=2021014 00:06:58.183 15:30:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:06:58.183 15:30:37 -- nvmf/common.sh@470 -- # waitforlisten 2021014 00:06:58.183 15:30:37 -- common/autotest_common.sh@819 -- # '[' -z 2021014 ']' 00:06:58.183 15:30:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.183 15:30:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.183 15:30:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.183 15:30:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.183 15:30:37 -- common/autotest_common.sh@10 -- # set +x 00:06:58.183 [2024-07-10 15:30:37.277767] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:58.183 [2024-07-10 15:30:37.277831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:58.183 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.183 [2024-07-10 15:30:37.346569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:58.183 [2024-07-10 15:30:37.466559] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.183 [2024-07-10 15:30:37.466718] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:58.183 [2024-07-10 15:30:37.466739] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:58.183 [2024-07-10 15:30:37.466754] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:58.183 [2024-07-10 15:30:37.466828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:58.183 [2024-07-10 15:30:37.466887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:58.183 [2024-07-10 15:30:37.466944] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:58.183 [2024-07-10 15:30:37.466947] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.118 15:30:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:59.118 15:30:38 -- common/autotest_common.sh@852 -- # return 0 00:06:59.118 15:30:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:06:59.118 15:30:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 15:30:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:59.118 15:30:38 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:59.118 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 [2024-07-10 15:30:38.268972] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:59.118 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.118 15:30:38 -- target/discovery.sh@26 -- # seq 1 4 00:06:59.118 15:30:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.118 15:30:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:06:59.118 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 Null1 00:06:59.118 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.118 15:30:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:59.118 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.118 15:30:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:06:59.118 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.118 15:30:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:59.118 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 [2024-07-10 15:30:38.309242] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:59.118 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.118 15:30:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.118 15:30:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:06:59.118 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 Null2 00:06:59.118 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.118 15:30:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:06:59.118 15:30:38 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.118 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.118 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.118 15:30:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:06:59.118 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.119 15:30:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 Null3 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:06:59.119 15:30:38 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 Null4 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:06:59.119 
15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:06:59.119 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.119 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.119 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.119 15:30:38 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:06:59.378 00:06:59.378 Discovery Log Number of Records 6, Generation counter 6 00:06:59.378 =====Discovery Log Entry 0====== 00:06:59.378 trtype: tcp 00:06:59.378 adrfam: ipv4 00:06:59.378 subtype: current discovery subsystem 00:06:59.378 treq: not required 00:06:59.378 portid: 0 00:06:59.378 trsvcid: 4420 00:06:59.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:59.378 traddr: 10.0.0.2 00:06:59.378 eflags: explicit discovery connections, duplicate discovery information 00:06:59.378 sectype: none 00:06:59.378 =====Discovery Log Entry 1====== 00:06:59.378 trtype: tcp 00:06:59.378 adrfam: ipv4 00:06:59.378 subtype: nvme subsystem 00:06:59.378 treq: not required 00:06:59.378 portid: 0 00:06:59.378 trsvcid: 4420 00:06:59.378 subnqn: nqn.2016-06.io.spdk:cnode1 00:06:59.378 traddr: 10.0.0.2 00:06:59.378 eflags: none 00:06:59.378 sectype: none 00:06:59.378 =====Discovery Log Entry 2====== 00:06:59.378 trtype: tcp 00:06:59.378 adrfam: ipv4 00:06:59.378 subtype: nvme subsystem 00:06:59.378 treq: not required 00:06:59.378 portid: 0 00:06:59.378 trsvcid: 4420 00:06:59.378 subnqn: nqn.2016-06.io.spdk:cnode2 00:06:59.378 traddr: 10.0.0.2 00:06:59.378 eflags: none 00:06:59.378 sectype: none 00:06:59.378 =====Discovery Log Entry 3====== 00:06:59.378 trtype: tcp 00:06:59.378 adrfam: ipv4 00:06:59.378 subtype: nvme subsystem 00:06:59.378 treq: not required 00:06:59.378 portid: 0 00:06:59.378 trsvcid: 4420 00:06:59.378 subnqn: nqn.2016-06.io.spdk:cnode3 00:06:59.378 traddr: 10.0.0.2 00:06:59.378 eflags: none 00:06:59.378 sectype: none 00:06:59.378 =====Discovery Log Entry 4====== 00:06:59.378 trtype: tcp 00:06:59.378 adrfam: ipv4 00:06:59.378 subtype: nvme subsystem 00:06:59.378 treq: not required 00:06:59.378 portid: 0 00:06:59.378 trsvcid: 4420 00:06:59.378 subnqn: nqn.2016-06.io.spdk:cnode4 00:06:59.378 traddr: 10.0.0.2 00:06:59.378 eflags: none 00:06:59.378 sectype: none 00:06:59.378 =====Discovery Log Entry 5====== 00:06:59.378 trtype: tcp 00:06:59.378 adrfam: ipv4 00:06:59.378 subtype: discovery subsystem referral 00:06:59.378 treq: not required 00:06:59.378 portid: 0 00:06:59.378 trsvcid: 4430 00:06:59.378 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:06:59.378 traddr: 10.0.0.2 00:06:59.378 eflags: none 00:06:59.378 sectype: none 00:06:59.378 15:30:38 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:06:59.378 Perform nvmf subsystem discovery via RPC 00:06:59.378 15:30:38 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:06:59.378 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.378 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.378 [2024-07-10 15:30:38.509779] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:06:59.378 [ 00:06:59.378 { 00:06:59.378 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:06:59.378 "subtype": "Discovery", 00:06:59.378 "listen_addresses": [ 00:06:59.378 { 00:06:59.378 "transport": "TCP", 00:06:59.378 "trtype": "TCP", 00:06:59.378 "adrfam": "IPv4", 00:06:59.378 "traddr": "10.0.0.2", 00:06:59.378 "trsvcid": "4420" 00:06:59.378 } 00:06:59.378 ], 00:06:59.378 "allow_any_host": true, 00:06:59.378 "hosts": [] 00:06:59.378 }, 00:06:59.378 { 00:06:59.378 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:06:59.378 "subtype": "NVMe", 00:06:59.378 "listen_addresses": [ 00:06:59.378 { 00:06:59.378 "transport": "TCP", 00:06:59.378 "trtype": "TCP", 00:06:59.378 "adrfam": "IPv4", 00:06:59.378 "traddr": "10.0.0.2", 00:06:59.378 "trsvcid": "4420" 00:06:59.378 } 00:06:59.378 ], 00:06:59.378 "allow_any_host": true, 00:06:59.378 "hosts": [], 00:06:59.378 "serial_number": "SPDK00000000000001", 00:06:59.378 "model_number": "SPDK bdev Controller", 00:06:59.378 "max_namespaces": 32, 00:06:59.378 "min_cntlid": 1, 00:06:59.378 "max_cntlid": 65519, 00:06:59.378 "namespaces": [ 00:06:59.378 { 00:06:59.378 "nsid": 1, 00:06:59.378 "bdev_name": "Null1", 00:06:59.378 "name": "Null1", 00:06:59.378 "nguid": "566D53794AFC48ADB866382E477A3F87", 00:06:59.378 "uuid": "566d5379-4afc-48ad-b866-382e477a3f87" 00:06:59.378 } 00:06:59.378 ] 00:06:59.378 }, 00:06:59.378 { 00:06:59.378 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:06:59.378 "subtype": "NVMe", 00:06:59.378 "listen_addresses": [ 00:06:59.378 { 00:06:59.378 "transport": "TCP", 00:06:59.378 "trtype": "TCP", 00:06:59.378 "adrfam": "IPv4", 00:06:59.378 "traddr": "10.0.0.2", 00:06:59.378 "trsvcid": "4420" 00:06:59.378 } 00:06:59.378 ], 00:06:59.378 "allow_any_host": true, 00:06:59.378 "hosts": [], 00:06:59.378 "serial_number": "SPDK00000000000002", 00:06:59.378 "model_number": "SPDK bdev Controller", 00:06:59.378 "max_namespaces": 32, 00:06:59.378 "min_cntlid": 1, 00:06:59.378 "max_cntlid": 65519, 00:06:59.378 "namespaces": [ 00:06:59.378 { 00:06:59.378 "nsid": 1, 00:06:59.378 "bdev_name": "Null2", 00:06:59.378 "name": "Null2", 00:06:59.378 "nguid": "6CBA69803ADE4294949C59039295C297", 00:06:59.378 "uuid": "6cba6980-3ade-4294-949c-59039295c297" 00:06:59.378 } 00:06:59.378 ] 00:06:59.378 }, 00:06:59.378 { 00:06:59.378 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:06:59.378 "subtype": "NVMe", 00:06:59.378 "listen_addresses": [ 00:06:59.378 { 00:06:59.378 "transport": "TCP", 00:06:59.378 "trtype": "TCP", 00:06:59.378 "adrfam": "IPv4", 00:06:59.378 "traddr": "10.0.0.2", 00:06:59.378 "trsvcid": "4420" 00:06:59.378 } 00:06:59.378 ], 00:06:59.378 "allow_any_host": true, 00:06:59.378 "hosts": [], 00:06:59.378 "serial_number": "SPDK00000000000003", 00:06:59.378 "model_number": "SPDK bdev Controller", 00:06:59.378 "max_namespaces": 32, 00:06:59.378 "min_cntlid": 1, 00:06:59.378 "max_cntlid": 65519, 00:06:59.378 "namespaces": [ 00:06:59.378 { 00:06:59.379 "nsid": 1, 00:06:59.379 "bdev_name": "Null3", 00:06:59.379 "name": "Null3", 00:06:59.379 "nguid": "89CAE0E925EF417EB59D625F2F836FA4", 00:06:59.379 "uuid": "89cae0e9-25ef-417e-b59d-625f2f836fa4" 00:06:59.379 } 00:06:59.379 ] 
00:06:59.379 }, 00:06:59.379 { 00:06:59.379 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:06:59.379 "subtype": "NVMe", 00:06:59.379 "listen_addresses": [ 00:06:59.379 { 00:06:59.379 "transport": "TCP", 00:06:59.379 "trtype": "TCP", 00:06:59.379 "adrfam": "IPv4", 00:06:59.379 "traddr": "10.0.0.2", 00:06:59.379 "trsvcid": "4420" 00:06:59.379 } 00:06:59.379 ], 00:06:59.379 "allow_any_host": true, 00:06:59.379 "hosts": [], 00:06:59.379 "serial_number": "SPDK00000000000004", 00:06:59.379 "model_number": "SPDK bdev Controller", 00:06:59.379 "max_namespaces": 32, 00:06:59.379 "min_cntlid": 1, 00:06:59.379 "max_cntlid": 65519, 00:06:59.379 "namespaces": [ 00:06:59.379 { 00:06:59.379 "nsid": 1, 00:06:59.379 "bdev_name": "Null4", 00:06:59.379 "name": "Null4", 00:06:59.379 "nguid": "F057E17DD11C4F2DA8133DD658682AAD", 00:06:59.379 "uuid": "f057e17d-d11c-4f2d-a813-3dd658682aad" 00:06:59.379 } 00:06:59.379 ] 00:06:59.379 } 00:06:59.379 ] 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@42 -- # seq 1 4 00:06:59.379 15:30:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.379 15:30:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.379 15:30:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.379 15:30:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:06:59.379 15:30:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
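The subsystem layout returned by nvmf_get_subsystems above (and now being torn down) was built earlier in this test with a short RPC sequence. Condensed from the trace, it is roughly the following, where rpc_cmd is the test suite's wrapper around the SPDK JSON-RPC client and NVME_HOSTNQN/NVME_HOSTID are the per-run values generated by nvme gen-hostnqn:

    # One null bdev + NVMe-oF subsystem per cnodeN, each listening on TCP 10.0.0.2:4420
    # (sizes and serial numbers as in the trace).
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in 1 2 3 4; do
        rpc_cmd bdev_null_create "Null${i}" 102400 512
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${i}" -a -s "SPDK0000000000000${i}"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${i}" "Null${i}"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${i}" -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420   # discovery service
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430             # extra referral entry
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_get_subsystems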
00:06:59.379 15:30:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:06:59.379 15:30:38 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:06:59.379 15:30:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:59.379 15:30:38 -- common/autotest_common.sh@10 -- # set +x 00:06:59.379 15:30:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.379 15:30:38 -- target/discovery.sh@49 -- # check_bdevs= 00:06:59.379 15:30:38 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:06:59.379 15:30:38 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:06:59.379 15:30:38 -- target/discovery.sh@57 -- # nvmftestfini 00:06:59.379 15:30:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:59.379 15:30:38 -- nvmf/common.sh@116 -- # sync 00:06:59.379 15:30:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:06:59.379 15:30:38 -- nvmf/common.sh@119 -- # set +e 00:06:59.379 15:30:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:06:59.379 15:30:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:06:59.379 rmmod nvme_tcp 00:06:59.379 rmmod nvme_fabrics 00:06:59.379 rmmod nvme_keyring 00:06:59.379 15:30:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:06:59.379 15:30:38 -- nvmf/common.sh@123 -- # set -e 00:06:59.379 15:30:38 -- nvmf/common.sh@124 -- # return 0 00:06:59.379 15:30:38 -- nvmf/common.sh@477 -- # '[' -n 2021014 ']' 00:06:59.379 15:30:38 -- nvmf/common.sh@478 -- # killprocess 2021014 00:06:59.379 15:30:38 -- common/autotest_common.sh@926 -- # '[' -z 2021014 ']' 00:06:59.379 15:30:38 -- common/autotest_common.sh@930 -- # kill -0 2021014 00:06:59.379 15:30:38 -- common/autotest_common.sh@931 -- # uname 00:06:59.379 15:30:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:59.379 15:30:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2021014 00:06:59.379 15:30:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:59.379 15:30:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:59.379 15:30:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2021014' 00:06:59.379 killing process with pid 2021014 00:06:59.379 15:30:38 -- common/autotest_common.sh@945 -- # kill 2021014 00:06:59.379 [2024-07-10 15:30:38.713290] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:06:59.379 15:30:38 -- common/autotest_common.sh@950 -- # wait 2021014 00:06:59.639 15:30:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:59.639 15:30:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:06:59.639 15:30:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:06:59.639 15:30:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:59.639 15:30:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:06:59.639 15:30:38 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:06:59.639 15:30:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:06:59.639 15:30:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.175 15:30:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:02.175 00:07:02.175 real 0m5.989s 00:07:02.175 user 0m6.918s 00:07:02.175 sys 0m1.831s 00:07:02.175 15:30:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.175 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.175 ************************************ 00:07:02.175 END TEST nvmf_discovery 00:07:02.175 ************************************ 00:07:02.175 15:30:41 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:02.175 15:30:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:02.175 15:30:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.175 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:02.175 ************************************ 00:07:02.175 START TEST nvmf_referrals 00:07:02.175 ************************************ 00:07:02.175 15:30:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:02.175 * Looking for test storage... 00:07:02.175 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:02.175 15:30:41 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:02.175 15:30:41 -- nvmf/common.sh@7 -- # uname -s 00:07:02.175 15:30:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:02.175 15:30:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:02.175 15:30:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:02.175 15:30:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:02.175 15:30:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:02.175 15:30:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:02.175 15:30:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:02.175 15:30:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:02.175 15:30:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:02.175 15:30:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:02.175 15:30:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.175 15:30:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:02.175 15:30:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:02.175 15:30:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:02.175 15:30:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:02.175 15:30:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:02.175 15:30:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:02.175 15:30:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:02.175 15:30:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:02.175 15:30:41 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.175 15:30:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.175 15:30:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.175 15:30:41 -- paths/export.sh@5 -- # export PATH 00:07:02.175 15:30:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:02.175 15:30:41 -- nvmf/common.sh@46 -- # : 0 00:07:02.175 15:30:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:02.175 15:30:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:02.175 15:30:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:02.175 15:30:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:02.175 15:30:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:02.175 15:30:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:02.175 15:30:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:02.175 15:30:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:02.175 15:30:41 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:02.175 15:30:41 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:02.175 15:30:41 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:07:02.175 15:30:41 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:02.175 15:30:41 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:02.175 15:30:41 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:02.175 15:30:41 -- target/referrals.sh@37 -- # nvmftestinit 00:07:02.175 15:30:41 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:07:02.175 15:30:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:02.175 15:30:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:02.175 15:30:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:02.175 15:30:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:02.175 15:30:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:02.175 15:30:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:02.175 15:30:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:02.175 15:30:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:02.175 15:30:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:02.175 15:30:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:02.175 15:30:41 -- common/autotest_common.sh@10 -- # set +x 00:07:04.074 15:30:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:04.074 15:30:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:04.074 15:30:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:04.074 15:30:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:04.074 15:30:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:04.074 15:30:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:04.074 15:30:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:04.074 15:30:43 -- nvmf/common.sh@294 -- # net_devs=() 00:07:04.074 15:30:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:07:04.074 15:30:43 -- nvmf/common.sh@295 -- # e810=() 00:07:04.074 15:30:43 -- nvmf/common.sh@295 -- # local -ga e810 00:07:04.074 15:30:43 -- nvmf/common.sh@296 -- # x722=() 00:07:04.074 15:30:43 -- nvmf/common.sh@296 -- # local -ga x722 00:07:04.074 15:30:43 -- nvmf/common.sh@297 -- # mlx=() 00:07:04.074 15:30:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:04.074 15:30:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:04.074 15:30:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:04.074 15:30:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:04.074 15:30:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:04.075 15:30:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:04.075 15:30:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:04.075 15:30:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:04.075 15:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:04.075 15:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:04.075 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:04.075 15:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:04.075 15:30:43 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:04.075 15:30:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:04.075 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:04.075 15:30:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:04.075 15:30:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:04.075 15:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.075 15:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:04.075 15:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.075 15:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:04.075 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:04.075 15:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.075 15:30:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:04.075 15:30:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:04.075 15:30:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:04.075 15:30:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:04.075 15:30:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:04.075 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:04.075 15:30:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:04.075 15:30:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:04.075 15:30:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:04.075 15:30:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:04.075 15:30:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:04.075 15:30:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:04.075 15:30:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:04.075 15:30:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:04.075 15:30:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:04.075 15:30:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:04.075 15:30:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:04.075 15:30:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:04.075 15:30:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:04.075 15:30:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:04.075 15:30:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:04.075 15:30:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:04.075 15:30:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
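The nvmf_tcp_init sequence traced around this point (it begins a few entries above and continues below) splits the two e810 test ports between the host and a private network namespace, so initiator and target traffic actually traverse the link. Condensed, and using the cvl_0_0/cvl_0_1 names this host assigned to the ports, the setup amounts to:

    # Target port lives in its own namespace with 10.0.0.2; the host keeps 10.0.0.1 as initiator.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port in the host firewall
    ping -c 1 10.0.0.2                                             # host -> namespace reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> host reachability

The nvmf_tgt application is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...), which is why the listeners in these tests are added on 10.0.0.2.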
00:07:04.075 15:30:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:04.075 15:30:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:04.075 15:30:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:04.075 15:30:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:04.075 15:30:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:04.075 15:30:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:04.075 15:30:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:04.075 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:04.075 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:07:04.075 00:07:04.075 --- 10.0.0.2 ping statistics --- 00:07:04.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.075 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:07:04.075 15:30:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:04.075 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:04.075 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:07:04.075 00:07:04.075 --- 10.0.0.1 ping statistics --- 00:07:04.075 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:04.075 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:07:04.075 15:30:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:04.075 15:30:43 -- nvmf/common.sh@410 -- # return 0 00:07:04.075 15:30:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:04.075 15:30:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:04.075 15:30:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:04.075 15:30:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:04.075 15:30:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:04.075 15:30:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:04.075 15:30:43 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:04.075 15:30:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:04.075 15:30:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:04.075 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.075 15:30:43 -- nvmf/common.sh@469 -- # nvmfpid=2023133 00:07:04.075 15:30:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:04.075 15:30:43 -- nvmf/common.sh@470 -- # waitforlisten 2023133 00:07:04.075 15:30:43 -- common/autotest_common.sh@819 -- # '[' -z 2023133 ']' 00:07:04.075 15:30:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.075 15:30:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.075 15:30:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.075 15:30:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.075 15:30:43 -- common/autotest_common.sh@10 -- # set +x 00:07:04.075 [2024-07-10 15:30:43.412523] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:04.075 [2024-07-10 15:30:43.412609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:04.075 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.331 [2024-07-10 15:30:43.481308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.331 [2024-07-10 15:30:43.591481] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.331 [2024-07-10 15:30:43.591637] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:04.331 [2024-07-10 15:30:43.591654] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:04.331 [2024-07-10 15:30:43.591668] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:04.331 [2024-07-10 15:30:43.591736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.331 [2024-07-10 15:30:43.591857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.331 [2024-07-10 15:30:43.591935] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.331 [2024-07-10 15:30:43.591932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:05.306 15:30:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:05.306 15:30:44 -- common/autotest_common.sh@852 -- # return 0 00:07:05.306 15:30:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:05.306 15:30:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 15:30:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:05.306 15:30:44 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:05.306 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 [2024-07-10 15:30:44.434085] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:05.306 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:05.306 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 [2024-07-10 15:30:44.446259] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:07:05.306 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:05.306 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:05.306 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:07:05.306 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.306 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.306 15:30:44 -- target/referrals.sh@48 -- # jq length 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:05.306 15:30:44 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:05.306 15:30:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:05.306 15:30:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.306 15:30:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:05.306 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.306 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.306 15:30:44 -- target/referrals.sh@21 -- # sort 00:07:05.306 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:05.306 15:30:44 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:05.306 15:30:44 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:05.306 15:30:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.306 15:30:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.306 15:30:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.306 15:30:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.307 15:30:44 -- target/referrals.sh@26 -- # sort 00:07:05.563 15:30:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:05.563 15:30:44 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:05.563 15:30:44 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:05.563 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.563 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.563 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.563 15:30:44 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:05.563 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.563 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.563 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.563 15:30:44 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:05.563 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.563 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.563 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.563 15:30:44 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.563 15:30:44 -- target/referrals.sh@56 -- # jq length 00:07:05.563 15:30:44 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.563 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.563 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.563 15:30:44 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:05.563 15:30:44 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:05.563 15:30:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.563 15:30:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.563 15:30:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.563 15:30:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.563 15:30:44 -- target/referrals.sh@26 -- # sort 00:07:05.563 15:30:44 -- target/referrals.sh@26 -- # echo 00:07:05.563 15:30:44 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:05.563 15:30:44 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:07:05.563 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.563 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.563 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.563 15:30:44 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:05.563 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.563 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.563 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.563 15:30:44 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:05.563 15:30:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:05.563 15:30:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:05.563 15:30:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:05.563 15:30:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:05.563 15:30:44 -- common/autotest_common.sh@10 -- # set +x 00:07:05.563 15:30:44 -- target/referrals.sh@21 -- # sort 00:07:05.563 15:30:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:05.819 15:30:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:05.819 15:30:44 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:05.819 15:30:44 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:05.819 15:30:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:05.819 15:30:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:05.819 15:30:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.820 15:30:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:05.820 15:30:44 -- target/referrals.sh@26 -- # sort 00:07:05.820 15:30:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:05.820 15:30:45 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:05.820 15:30:45 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:05.820 15:30:45 -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:05.820 15:30:45 -- 
target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:05.820 15:30:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:05.820 15:30:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:06.077 15:30:45 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:06.077 15:30:45 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:06.077 15:30:45 -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:06.077 15:30:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:06.077 15:30:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.077 15:30:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:06.077 15:30:45 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:06.077 15:30:45 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:06.077 15:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:06.077 15:30:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.334 15:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:06.334 15:30:45 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:06.334 15:30:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:06.334 15:30:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.334 15:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:06.334 15:30:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:06.334 15:30:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.334 15:30:45 -- target/referrals.sh@21 -- # sort 00:07:06.334 15:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:06.334 15:30:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:06.334 15:30:45 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:06.334 15:30:45 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:06.334 15:30:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.334 15:30:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.334 15:30:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.334 15:30:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.334 15:30:45 -- target/referrals.sh@26 -- # sort 00:07:06.334 15:30:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:06.334 15:30:45 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:06.334 15:30:45 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:06.334 15:30:45 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:06.334 15:30:45 -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:06.334 15:30:45 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.334 15:30:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:06.591 15:30:45 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:06.591 15:30:45 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:06.591 15:30:45 -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:06.591 15:30:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:06.591 15:30:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.591 15:30:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:06.591 15:30:45 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:06.591 15:30:45 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:06.591 15:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:06.591 15:30:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.591 15:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:06.591 15:30:45 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:06.591 15:30:45 -- target/referrals.sh@82 -- # jq length 00:07:06.591 15:30:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:06.591 15:30:45 -- common/autotest_common.sh@10 -- # set +x 00:07:06.591 15:30:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:06.591 15:30:45 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:06.591 15:30:45 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:06.591 15:30:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:06.591 15:30:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:06.591 15:30:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:06.591 15:30:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:06.591 15:30:45 -- target/referrals.sh@26 -- # sort 00:07:06.849 15:30:46 -- target/referrals.sh@26 -- # echo 00:07:06.849 15:30:46 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:06.849 15:30:46 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:06.849 15:30:46 -- target/referrals.sh@86 -- # nvmftestfini 00:07:06.849 15:30:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:07:06.849 15:30:46 -- nvmf/common.sh@116 -- # sync 00:07:06.849 15:30:46 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:07:06.849 15:30:46 -- nvmf/common.sh@119 -- # set +e 00:07:06.849 15:30:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:07:06.849 15:30:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:07:06.849 rmmod nvme_tcp 00:07:06.849 rmmod nvme_fabrics 00:07:06.849 rmmod nvme_keyring 00:07:06.849 15:30:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:07:06.849 15:30:46 -- nvmf/common.sh@123 -- # set -e 00:07:06.849 15:30:46 -- nvmf/common.sh@124 -- # return 0 00:07:06.849 15:30:46 -- nvmf/common.sh@477 
-- # '[' -n 2023133 ']' 00:07:06.849 15:30:46 -- nvmf/common.sh@478 -- # killprocess 2023133 00:07:06.849 15:30:46 -- common/autotest_common.sh@926 -- # '[' -z 2023133 ']' 00:07:06.849 15:30:46 -- common/autotest_common.sh@930 -- # kill -0 2023133 00:07:06.849 15:30:46 -- common/autotest_common.sh@931 -- # uname 00:07:06.849 15:30:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:06.849 15:30:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2023133 00:07:06.849 15:30:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:06.849 15:30:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:06.849 15:30:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2023133' 00:07:06.849 killing process with pid 2023133 00:07:06.849 15:30:46 -- common/autotest_common.sh@945 -- # kill 2023133 00:07:06.849 15:30:46 -- common/autotest_common.sh@950 -- # wait 2023133 00:07:07.106 15:30:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:07:07.106 15:30:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:07:07.106 15:30:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:07:07.106 15:30:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:07.106 15:30:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:07:07.106 15:30:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.106 15:30:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:07.106 15:30:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.647 15:30:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:07:09.647 00:07:09.647 real 0m7.432s 00:07:09.647 user 0m12.889s 00:07:09.647 sys 0m2.149s 00:07:09.647 15:30:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.647 15:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:09.647 ************************************ 00:07:09.647 END TEST nvmf_referrals 00:07:09.647 ************************************ 00:07:09.647 15:30:48 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:09.647 15:30:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:09.647 15:30:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.647 15:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:09.647 ************************************ 00:07:09.647 START TEST nvmf_connect_disconnect 00:07:09.647 ************************************ 00:07:09.647 15:30:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:07:09.647 * Looking for test storage... 
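Condensed, the nvmf_referrals test that just finished is a round trip between the discovery RPCs and the discovery log page: publish referrals, read them back with nvmf_discovery_get_referrals, confirm nvme discover reports the same addresses, then remove them again. A sketch of that flow, assuming rpc_cmd in the harness forwards its arguments to scripts/rpc.py on /var/tmp/spdk.sock (which is what the socket message above indicates) and that the target from the previous step is listening on 10.0.0.2:8009:

#!/usr/bin/env bash
set -e
rpc="scripts/rpc.py"     # run from an SPDK checkout; defaults to /var/tmp/spdk.sock
host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55)

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

# Publish three referrals and read them back over RPC and over the wire.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
nvme discover "${host[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json |
    jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

# Remove them again; an empty discovery log page should remain.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
done

The -n flag seen later in the trace attaches a referral to a specific subsystem NQN instead of the discovery subsystem, which is what the get_discovery_entries checks on subtype are distinguishing.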
00:07:09.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:09.647 15:30:48 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:09.647 15:30:48 -- nvmf/common.sh@7 -- # uname -s 00:07:09.647 15:30:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:09.647 15:30:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:09.647 15:30:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:09.647 15:30:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:09.647 15:30:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:09.647 15:30:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:09.647 15:30:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:09.647 15:30:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:09.647 15:30:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:09.647 15:30:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:09.647 15:30:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.647 15:30:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:09.647 15:30:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:09.647 15:30:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:09.647 15:30:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:09.647 15:30:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:09.647 15:30:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:09.647 15:30:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:09.647 15:30:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:09.647 15:30:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.647 15:30:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.647 15:30:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.647 15:30:48 -- paths/export.sh@5 -- # export PATH 00:07:09.647 15:30:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:09.647 15:30:48 -- nvmf/common.sh@46 -- # : 0 00:07:09.647 15:30:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:09.647 15:30:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:09.647 15:30:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:09.647 15:30:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:09.647 15:30:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:09.647 15:30:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:09.647 15:30:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:09.647 15:30:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:09.647 15:30:48 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:09.647 15:30:48 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:09.647 15:30:48 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:07:09.647 15:30:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:09.647 15:30:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:09.647 15:30:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:09.647 15:30:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:09.647 15:30:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:09.647 15:30:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:09.647 15:30:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:09.647 15:30:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:09.647 15:30:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:07:09.647 15:30:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:07:09.647 15:30:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:07:09.647 15:30:48 -- common/autotest_common.sh@10 -- # set +x 00:07:11.551 15:30:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:07:11.551 15:30:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:07:11.551 15:30:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:07:11.551 15:30:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:07:11.551 15:30:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:07:11.551 15:30:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:07:11.551 15:30:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:07:11.551 15:30:50 -- nvmf/common.sh@294 -- # net_devs=() 00:07:11.551 15:30:50 -- nvmf/common.sh@294 -- # local -ga net_devs 
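The scan that continues below re-runs gather_supported_nvmf_pci_devs for this test: NICs are bucketed by PCI vendor:device ID through the harness's pci_bus_cache map (e810 for 8086:1592/159b, x722 for 8086:37d2, mlx for the 15b3 ConnectX IDs) before their net devices are resolved. A rough approximation of that bucketing with lspci standing in for the cache; it is a simplification, not the harness code:

#!/usr/bin/env bash
# Bucket NICs by PCI vendor:device ID, roughly as gather_supported_nvmf_pci_devs does.
e810=() x722=() mlx=()
while read -r slot _class id _rest; do
    case "$id" in
        8086:1592|8086:159b) e810+=("$slot") ;;   # Intel E810 (ice)
        8086:37d2)           x722+=("$slot") ;;   # Intel X722
        15b3:*)              mlx+=("$slot")  ;;   # Mellanox ConnectX family
    esac
done < <(lspci -Dn)
printf 'E810 candidates: %s\n' "${e810[*]}"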
00:07:11.551 15:30:50 -- nvmf/common.sh@295 -- # e810=() 00:07:11.551 15:30:50 -- nvmf/common.sh@295 -- # local -ga e810 00:07:11.551 15:30:50 -- nvmf/common.sh@296 -- # x722=() 00:07:11.551 15:30:50 -- nvmf/common.sh@296 -- # local -ga x722 00:07:11.551 15:30:50 -- nvmf/common.sh@297 -- # mlx=() 00:07:11.551 15:30:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:07:11.551 15:30:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:11.551 15:30:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:07:11.551 15:30:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:07:11.551 15:30:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:07:11.551 15:30:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:11.551 15:30:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:11.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:11.551 15:30:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:07:11.551 15:30:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:11.551 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:11.551 15:30:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:07:11.551 15:30:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:11.551 15:30:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.551 15:30:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:11.551 15:30:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.551 15:30:50 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:07:11.551 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:11.551 15:30:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.551 15:30:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:07:11.551 15:30:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:11.551 15:30:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:07:11.551 15:30:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:11.551 15:30:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:11.551 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:11.551 15:30:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:07:11.551 15:30:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:07:11.551 15:30:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:07:11.551 15:30:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:07:11.551 15:30:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:11.551 15:30:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:11.551 15:30:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:11.551 15:30:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:07:11.551 15:30:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:11.551 15:30:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:11.551 15:30:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:07:11.551 15:30:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:11.551 15:30:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:11.551 15:30:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:07:11.551 15:30:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:07:11.551 15:30:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:07:11.551 15:30:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:11.551 15:30:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:11.551 15:30:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:11.551 15:30:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:07:11.551 15:30:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:11.551 15:30:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:11.551 15:30:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:11.551 15:30:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:07:11.551 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:11.551 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:07:11.551 00:07:11.551 --- 10.0.0.2 ping statistics --- 00:07:11.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.551 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:07:11.551 15:30:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:11.551 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:11.551 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:07:11.551 00:07:11.551 --- 10.0.0.1 ping statistics --- 00:07:11.551 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:11.551 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:07:11.551 15:30:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:11.551 15:30:50 -- nvmf/common.sh@410 -- # return 0 00:07:11.551 15:30:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:11.551 15:30:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:11.551 15:30:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:11.551 15:30:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:11.551 15:30:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:11.551 15:30:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:11.551 15:30:50 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:07:11.551 15:30:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:07:11.551 15:30:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:11.551 15:30:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.551 15:30:50 -- nvmf/common.sh@469 -- # nvmfpid=2025581 00:07:11.551 15:30:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:11.551 15:30:50 -- nvmf/common.sh@470 -- # waitforlisten 2025581 00:07:11.551 15:30:50 -- common/autotest_common.sh@819 -- # '[' -z 2025581 ']' 00:07:11.551 15:30:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.551 15:30:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:11.551 15:30:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.551 15:30:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:11.551 15:30:50 -- common/autotest_common.sh@10 -- # set +x 00:07:11.552 [2024-07-10 15:30:50.880732] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:11.552 [2024-07-10 15:30:50.880817] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:11.552 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.809 [2024-07-10 15:30:50.960617] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:11.809 [2024-07-10 15:30:51.085294] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:11.809 [2024-07-10 15:30:51.085467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:11.809 [2024-07-10 15:30:51.085489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:11.809 [2024-07-10 15:30:51.085504] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
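waitforlisten, called right after nvmfappstart above, is what turns the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message into a green light once the reactors below come up. A hypothetical stand-in for it (not the harness's exact implementation): keep polling the RPC socket with a harmless method while the pid stays alive, bounded by the max_retries=100 local visible in the trace:

# Hypothetical stand-in for waitforlisten; assumes an SPDK checkout as CWD.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1     # target died before listening
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            return 0                               # RPC server is answering
        fi
        sleep 0.5
    done
    return 1                                       # gave up
}
# Usage: waitforlisten_sketch "$nvmfpid"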
00:07:11.809 [2024-07-10 15:30:51.085572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.809 [2024-07-10 15:30:51.085601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.809 [2024-07-10 15:30:51.085654] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:11.809 [2024-07-10 15:30:51.085658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.741 15:30:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:12.741 15:30:51 -- common/autotest_common.sh@852 -- # return 0 00:07:12.741 15:30:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:07:12.741 15:30:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:12.741 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 15:30:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:12.741 15:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.741 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 [2024-07-10 15:30:51.882955] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:12.741 15:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:07:12.741 15:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.741 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 15:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:12.741 15:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.741 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 15:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:12.741 15:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.741 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 15:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:12.741 15:30:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:12.741 15:30:51 -- common/autotest_common.sh@10 -- # set +x 00:07:12.741 [2024-07-10 15:30:51.939791] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.741 15:30:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:07:12.741 15:30:51 -- target/connect_disconnect.sh@34 -- # set +x 00:07:15.265 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:17.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:19.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:22.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:07:24.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:26.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:29.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:31.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:33.580 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:36.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:38.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:40.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:43.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:45.650 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:47.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:50.062 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:52.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:54.481 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:57.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:59.534 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:01.433 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:03.961 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:05.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.330 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.836 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.784 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.358 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.304 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.255 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:50.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.726 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.623 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.626 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.150 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.049 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.522 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:09:18.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.975 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:27.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.926 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:34.449 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.345 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:41.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.295 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.910 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.807 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.855 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.751 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:59.160 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.101 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.072 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.029 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.555 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:32.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.970 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.447 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.716 15:34:43 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
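The connect_disconnect pass that just ended provisions the target once (a 64 MiB, 512-byte-block malloc bdev exported as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420) and then runs num_iterations=100 connect/disconnect cycles with NVME_CONNECT='nvme connect -i 8'. The loop body itself is hidden behind the set +x above, so the iteration shown here is a plausible reconstruction rather than the script verbatim; the real test also waits for the namespace to appear and disappear between the two steps:

# Target-side provisioning, taken from the rpc_cmd calls in the trace.
rpc="scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
$rpc bdev_malloc_create 64 512                    # returns the bdev name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: reconstructed connect/disconnect loop.
for ((i = 0; i < 100; i++)); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # source of the "disconnected 1 controller(s)" lines
done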
00:11:03.716 15:34:43 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:03.716 15:34:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:03.716 15:34:43 -- nvmf/common.sh@116 -- # sync 00:11:03.716 15:34:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:03.716 15:34:43 -- nvmf/common.sh@119 -- # set +e 00:11:03.716 15:34:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:03.716 15:34:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:03.716 rmmod nvme_tcp 00:11:03.716 rmmod nvme_fabrics 00:11:03.716 rmmod nvme_keyring 00:11:03.716 15:34:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:03.716 15:34:43 -- nvmf/common.sh@123 -- # set -e 00:11:03.716 15:34:43 -- nvmf/common.sh@124 -- # return 0 00:11:03.716 15:34:43 -- nvmf/common.sh@477 -- # '[' -n 2025581 ']' 00:11:03.716 15:34:43 -- nvmf/common.sh@478 -- # killprocess 2025581 00:11:03.716 15:34:43 -- common/autotest_common.sh@926 -- # '[' -z 2025581 ']' 00:11:03.716 15:34:43 -- common/autotest_common.sh@930 -- # kill -0 2025581 00:11:03.716 15:34:43 -- common/autotest_common.sh@931 -- # uname 00:11:03.973 15:34:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:03.973 15:34:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2025581 00:11:03.973 15:34:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:03.973 15:34:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:03.973 15:34:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2025581' 00:11:03.973 killing process with pid 2025581 00:11:03.973 15:34:43 -- common/autotest_common.sh@945 -- # kill 2025581 00:11:03.973 15:34:43 -- common/autotest_common.sh@950 -- # wait 2025581 00:11:04.232 15:34:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:04.232 15:34:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:04.232 15:34:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:04.232 15:34:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:04.232 15:34:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:04.232 15:34:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.232 15:34:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:04.232 15:34:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.141 15:34:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:06.141 00:11:06.141 real 3m56.936s 00:11:06.141 user 15m1.300s 00:11:06.141 sys 0m35.429s 00:11:06.141 15:34:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:06.141 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:11:06.141 ************************************ 00:11:06.141 END TEST nvmf_connect_disconnect 00:11:06.141 ************************************ 00:11:06.141 15:34:45 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:06.141 15:34:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:06.141 15:34:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:06.141 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:11:06.141 ************************************ 00:11:06.141 START TEST nvmf_multitarget 00:11:06.141 ************************************ 00:11:06.141 15:34:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:06.399 * Looking for test storage... 
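Both test runs above finish through the same nvmftestfini path: unload the initiator-side NVMe modules, stop the nvmf_tgt that was started in the namespace, drop the namespace, and flush the leftover address, which is why the next test (nvmf_multitarget, starting here) can rebuild the identical topology from scratch. A condensed sketch of that teardown; the namespace deletion is an assumption, since remove_spdk_ns runs with its trace suppressed:

# Teardown mirroring nvmftestfini / nvmf_tcp_fini above.
nvmfpid=${nvmfpid:?PID of the nvmf_tgt started earlier (2025581 in this run)}
sync
modprobe -v -r nvme-tcp        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring, as logged above
modprobe -v -r nvme-fabrics
kill "$nvmfpid"
while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.5; done   # the harness uses wait instead
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true        # assumed body of remove_spdk_ns
ip -4 addr flush cvl_0_1                                   # drop 10.0.0.1/24 from the initiator port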
00:11:06.399 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:06.399 15:34:45 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:06.399 15:34:45 -- nvmf/common.sh@7 -- # uname -s 00:11:06.399 15:34:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:06.399 15:34:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:06.399 15:34:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:06.399 15:34:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:06.399 15:34:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:06.399 15:34:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:06.399 15:34:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:06.399 15:34:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:06.399 15:34:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:06.399 15:34:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:06.399 15:34:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.399 15:34:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:06.399 15:34:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:06.399 15:34:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:06.399 15:34:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:06.399 15:34:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:06.399 15:34:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:06.399 15:34:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:06.399 15:34:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:06.399 15:34:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.399 15:34:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.399 15:34:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.399 15:34:45 -- paths/export.sh@5 -- # export PATH 00:11:06.399 15:34:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:06.399 15:34:45 -- nvmf/common.sh@46 -- # : 0 00:11:06.399 15:34:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:06.399 15:34:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:06.399 15:34:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:06.399 15:34:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:06.399 15:34:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:06.399 15:34:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:06.399 15:34:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:06.399 15:34:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:06.399 15:34:45 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:06.399 15:34:45 -- target/multitarget.sh@15 -- # nvmftestinit 00:11:06.399 15:34:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:06.399 15:34:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:06.399 15:34:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:06.399 15:34:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:06.399 15:34:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:06.399 15:34:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:06.399 15:34:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:06.399 15:34:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:06.399 15:34:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:06.399 15:34:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:06.399 15:34:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:06.399 15:34:45 -- common/autotest_common.sh@10 -- # set +x 00:11:08.351 15:34:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:08.351 15:34:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:08.351 15:34:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:08.351 15:34:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:08.351 15:34:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:08.351 15:34:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:08.351 15:34:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:08.351 15:34:47 -- nvmf/common.sh@294 -- # net_devs=() 00:11:08.351 15:34:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:08.351 15:34:47 -- 
nvmf/common.sh@295 -- # e810=() 00:11:08.351 15:34:47 -- nvmf/common.sh@295 -- # local -ga e810 00:11:08.351 15:34:47 -- nvmf/common.sh@296 -- # x722=() 00:11:08.351 15:34:47 -- nvmf/common.sh@296 -- # local -ga x722 00:11:08.351 15:34:47 -- nvmf/common.sh@297 -- # mlx=() 00:11:08.351 15:34:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:08.351 15:34:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.351 15:34:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:08.351 15:34:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:08.351 15:34:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:08.351 15:34:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:08.351 15:34:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:08.351 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:08.351 15:34:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:08.351 15:34:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:08.351 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:08.351 15:34:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:08.351 15:34:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:08.351 15:34:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.351 15:34:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:08.351 15:34:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.351 15:34:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:11:08.351 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:08.351 15:34:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.351 15:34:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:08.351 15:34:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.351 15:34:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:08.351 15:34:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.351 15:34:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:08.351 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:08.351 15:34:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.351 15:34:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:08.351 15:34:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:08.351 15:34:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:08.351 15:34:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:08.351 15:34:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.351 15:34:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.351 15:34:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:08.351 15:34:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:08.351 15:34:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:08.351 15:34:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:08.351 15:34:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:08.351 15:34:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:08.352 15:34:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.352 15:34:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:08.352 15:34:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:08.352 15:34:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:08.352 15:34:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:08.352 15:34:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:08.352 15:34:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:08.352 15:34:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:08.352 15:34:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:08.352 15:34:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:08.352 15:34:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:08.352 15:34:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:08.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:08.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.374 ms 00:11:08.352 00:11:08.352 --- 10.0.0.2 ping statistics --- 00:11:08.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.352 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:11:08.352 15:34:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:08.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:08.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:11:08.352 00:11:08.352 --- 10.0.0.1 ping statistics --- 00:11:08.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.352 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:11:08.352 15:34:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.352 15:34:47 -- nvmf/common.sh@410 -- # return 0 00:11:08.352 15:34:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:08.352 15:34:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.352 15:34:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:08.352 15:34:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:08.352 15:34:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.352 15:34:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:08.352 15:34:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:08.352 15:34:47 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:08.352 15:34:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:08.352 15:34:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:08.352 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:11:08.352 15:34:47 -- nvmf/common.sh@469 -- # nvmfpid=2057615 00:11:08.352 15:34:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:08.352 15:34:47 -- nvmf/common.sh@470 -- # waitforlisten 2057615 00:11:08.352 15:34:47 -- common/autotest_common.sh@819 -- # '[' -z 2057615 ']' 00:11:08.352 15:34:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.352 15:34:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:08.352 15:34:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.352 15:34:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:08.352 15:34:47 -- common/autotest_common.sh@10 -- # set +x 00:11:08.616 [2024-07-10 15:34:47.738510] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:08.616 [2024-07-10 15:34:47.738589] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.616 EAL: No free 2048 kB hugepages reported on node 1 00:11:08.616 [2024-07-10 15:34:47.803480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:08.616 [2024-07-10 15:34:47.913915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:08.616 [2024-07-10 15:34:47.914070] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.616 [2024-07-10 15:34:47.914094] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.616 [2024-07-10 15:34:47.914107] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
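The nvmf_tcp_init trace above reduces to a small two-port loopback: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and carries the target address, while the second port (cvl_0_1) stays in the root namespace as the initiator. A condensed sketch of the same setup, keeping the interface names and the 10.0.0.0/24 addressing from this run (all commands as root):

  # target-side port lives in its own namespace; initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP (port 4420) in
  ping -c 1 10.0.0.2                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator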
00:11:08.616 [2024-07-10 15:34:47.914187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.616 [2024-07-10 15:34:47.914272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:08.616 [2024-07-10 15:34:47.914320] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:08.616 [2024-07-10 15:34:47.914323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.549 15:34:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:09.549 15:34:48 -- common/autotest_common.sh@852 -- # return 0 00:11:09.549 15:34:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:09.549 15:34:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:09.549 15:34:48 -- common/autotest_common.sh@10 -- # set +x 00:11:09.549 15:34:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:09.549 15:34:48 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:09.549 15:34:48 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:09.549 15:34:48 -- target/multitarget.sh@21 -- # jq length 00:11:09.549 15:34:48 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:09.549 15:34:48 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:11:09.549 "nvmf_tgt_1" 00:11:09.549 15:34:48 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:09.806 "nvmf_tgt_2" 00:11:09.806 15:34:49 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:09.806 15:34:49 -- target/multitarget.sh@28 -- # jq length 00:11:09.806 15:34:49 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:09.806 15:34:49 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:10.063 true 00:11:10.063 15:34:49 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:10.063 true 00:11:10.063 15:34:49 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:10.063 15:34:49 -- target/multitarget.sh@35 -- # jq length 00:11:10.322 15:34:49 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:10.322 15:34:49 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:10.322 15:34:49 -- target/multitarget.sh@41 -- # nvmftestfini 00:11:10.322 15:34:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:10.322 15:34:49 -- nvmf/common.sh@116 -- # sync 00:11:10.322 15:34:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:10.322 15:34:49 -- nvmf/common.sh@119 -- # set +e 00:11:10.322 15:34:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:10.322 15:34:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:10.322 rmmod nvme_tcp 00:11:10.322 rmmod nvme_fabrics 00:11:10.322 rmmod nvme_keyring 00:11:10.322 15:34:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:10.322 15:34:49 -- nvmf/common.sh@123 -- # set -e 00:11:10.322 15:34:49 -- nvmf/common.sh@124 -- # return 0 
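The multitarget test itself is driven entirely through test/nvmf/target/multitarget_rpc.py against the running nvmf_tgt: it counts the existing targets, adds two more, deletes them again, and checks the count after each step with jq. Roughly, with the target names used in this run:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target so far
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + the two new targets
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default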
00:11:10.322 15:34:49 -- nvmf/common.sh@477 -- # '[' -n 2057615 ']' 00:11:10.322 15:34:49 -- nvmf/common.sh@478 -- # killprocess 2057615 00:11:10.322 15:34:49 -- common/autotest_common.sh@926 -- # '[' -z 2057615 ']' 00:11:10.322 15:34:49 -- common/autotest_common.sh@930 -- # kill -0 2057615 00:11:10.322 15:34:49 -- common/autotest_common.sh@931 -- # uname 00:11:10.322 15:34:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:10.322 15:34:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2057615 00:11:10.322 15:34:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:10.322 15:34:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:10.322 15:34:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2057615' 00:11:10.322 killing process with pid 2057615 00:11:10.322 15:34:49 -- common/autotest_common.sh@945 -- # kill 2057615 00:11:10.322 15:34:49 -- common/autotest_common.sh@950 -- # wait 2057615 00:11:10.581 15:34:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:10.581 15:34:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:10.581 15:34:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:10.581 15:34:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:10.581 15:34:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:10.581 15:34:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.581 15:34:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:10.581 15:34:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.117 15:34:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:13.117 00:11:13.117 real 0m6.399s 00:11:13.117 user 0m9.126s 00:11:13.117 sys 0m1.978s 00:11:13.117 15:34:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:13.117 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:11:13.117 ************************************ 00:11:13.117 END TEST nvmf_multitarget 00:11:13.117 ************************************ 00:11:13.117 15:34:51 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:13.117 15:34:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:13.117 15:34:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:13.117 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:11:13.117 ************************************ 00:11:13.117 START TEST nvmf_rpc 00:11:13.117 ************************************ 00:11:13.117 15:34:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:13.117 * Looking for test storage... 
00:11:13.117 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.117 15:34:51 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.117 15:34:51 -- nvmf/common.sh@7 -- # uname -s 00:11:13.117 15:34:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.117 15:34:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.117 15:34:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.117 15:34:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.117 15:34:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:13.117 15:34:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:13.117 15:34:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.117 15:34:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:13.117 15:34:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.117 15:34:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:13.117 15:34:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.117 15:34:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:13.117 15:34:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.117 15:34:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:13.117 15:34:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:13.117 15:34:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.117 15:34:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.117 15:34:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.117 15:34:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.117 15:34:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.117 15:34:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.117 15:34:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.117 15:34:51 -- paths/export.sh@5 -- # export PATH 00:11:13.117 15:34:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.117 15:34:51 -- nvmf/common.sh@46 -- # : 0 00:11:13.117 15:34:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:13.117 15:34:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:13.117 15:34:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:13.117 15:34:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.117 15:34:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.117 15:34:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:13.117 15:34:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:13.117 15:34:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:13.117 15:34:51 -- target/rpc.sh@11 -- # loops=5 00:11:13.117 15:34:51 -- target/rpc.sh@23 -- # nvmftestinit 00:11:13.117 15:34:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:13.117 15:34:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.117 15:34:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:13.117 15:34:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:13.117 15:34:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:13.117 15:34:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:13.117 15:34:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:13.117 15:34:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:13.117 15:34:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:13.117 15:34:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:13.117 15:34:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:13.117 15:34:51 -- common/autotest_common.sh@10 -- # set +x 00:11:14.492 15:34:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:14.492 15:34:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:14.492 15:34:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:14.492 15:34:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:14.492 15:34:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:14.492 15:34:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:14.492 15:34:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:14.492 15:34:53 -- nvmf/common.sh@294 -- # net_devs=() 00:11:14.492 15:34:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:14.492 15:34:53 -- nvmf/common.sh@295 -- # e810=() 00:11:14.492 15:34:53 -- nvmf/common.sh@295 -- # local -ga e810 00:11:14.492 
15:34:53 -- nvmf/common.sh@296 -- # x722=() 00:11:14.492 15:34:53 -- nvmf/common.sh@296 -- # local -ga x722 00:11:14.492 15:34:53 -- nvmf/common.sh@297 -- # mlx=() 00:11:14.492 15:34:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:14.492 15:34:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:14.492 15:34:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:14.492 15:34:53 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:14.492 15:34:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:14.492 15:34:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:14.492 15:34:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:14.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:14.492 15:34:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:14.492 15:34:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:14.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:14.492 15:34:53 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:14.492 15:34:53 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:14.492 15:34:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:14.492 15:34:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.492 15:34:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:14.492 15:34:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.492 15:34:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:14.492 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:14.492 15:34:53 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:11:14.492 15:34:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:14.492 15:34:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:14.492 15:34:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:14.492 15:34:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:14.492 15:34:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:14.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:14.492 15:34:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:14.492 15:34:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:14.493 15:34:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:14.493 15:34:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:14.493 15:34:53 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:14.493 15:34:53 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:14.493 15:34:53 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.493 15:34:53 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.493 15:34:53 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:14.493 15:34:53 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:14.493 15:34:53 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:14.493 15:34:53 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:14.493 15:34:53 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:14.493 15:34:53 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:14.493 15:34:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.493 15:34:53 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:14.493 15:34:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:14.493 15:34:53 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:14.493 15:34:53 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:14.493 15:34:53 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:14.493 15:34:53 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:14.751 15:34:53 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:14.751 15:34:53 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:14.751 15:34:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:14.751 15:34:53 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:14.751 15:34:53 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:14.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:11:14.751 00:11:14.751 --- 10.0.0.2 ping statistics --- 00:11:14.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.751 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:11:14.751 15:34:53 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:14.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:14.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:14.751 00:11:14.751 --- 10.0.0.1 ping statistics --- 00:11:14.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.751 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:14.751 15:34:53 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.751 15:34:53 -- nvmf/common.sh@410 -- # return 0 00:11:14.751 15:34:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:14.751 15:34:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.751 15:34:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:14.751 15:34:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:14.751 15:34:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.751 15:34:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:14.751 15:34:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:14.751 15:34:53 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:14.751 15:34:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:14.751 15:34:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:14.751 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.751 15:34:53 -- nvmf/common.sh@469 -- # nvmfpid=2059862 00:11:14.751 15:34:53 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:14.751 15:34:53 -- nvmf/common.sh@470 -- # waitforlisten 2059862 00:11:14.751 15:34:53 -- common/autotest_common.sh@819 -- # '[' -z 2059862 ']' 00:11:14.751 15:34:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.751 15:34:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:14.751 15:34:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.751 15:34:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:14.751 15:34:53 -- common/autotest_common.sh@10 -- # set +x 00:11:14.751 [2024-07-10 15:34:54.000976] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:14.751 [2024-07-10 15:34:54.001061] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:14.751 EAL: No free 2048 kB hugepages reported on node 1 00:11:14.751 [2024-07-10 15:34:54.070530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.010 [2024-07-10 15:34:54.190947] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.010 [2024-07-10 15:34:54.191101] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.010 [2024-07-10 15:34:54.191120] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.010 [2024-07-10 15:34:54.191134] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
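As in the previous test, nvmfappstart launches the target application inside the target namespace and then waits for its RPC socket before issuing any RPCs. A minimal sketch of that step, with the nvmf_tgt command line and the /var/tmp/spdk.sock path taken from the trace; the polling loop is a hypothetical stand-in for the harness's waitforlisten helper:

  ns="ip netns exec cvl_0_0_ns_spdk"
  $ns /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # hypothetical stand-in for waitforlisten: poll for the RPC socket instead of sleeping blindly
  for _ in $(seq 1 100); do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done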
00:11:15.010 [2024-07-10 15:34:54.191198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.010 [2024-07-10 15:34:54.191255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.010 [2024-07-10 15:34:54.191304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.010 [2024-07-10 15:34:54.191308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.943 15:34:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:15.943 15:34:54 -- common/autotest_common.sh@852 -- # return 0 00:11:15.943 15:34:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:15.943 15:34:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:15.943 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:11:15.943 15:34:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.943 15:34:54 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:15.944 15:34:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:54 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- target/rpc.sh@26 -- # stats='{ 00:11:15.944 "tick_rate": 2700000000, 00:11:15.944 "poll_groups": [ 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_0", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [] 00:11:15.944 }, 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_1", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [] 00:11:15.944 }, 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_2", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [] 00:11:15.944 }, 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_3", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [] 00:11:15.944 } 00:11:15.944 ] 00:11:15.944 }' 00:11:15.944 15:34:55 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:15.944 15:34:55 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:15.944 15:34:55 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:15.944 15:34:55 -- target/rpc.sh@15 -- # wc -l 00:11:15.944 15:34:55 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:15.944 15:34:55 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:15.944 15:34:55 -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:15.944 15:34:55 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:15.944 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 [2024-07-10 15:34:55.092247] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.944 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:15.944 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- target/rpc.sh@33 -- # stats='{ 00:11:15.944 "tick_rate": 2700000000, 00:11:15.944 "poll_groups": [ 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_0", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [ 00:11:15.944 { 00:11:15.944 "trtype": "TCP" 00:11:15.944 } 00:11:15.944 ] 00:11:15.944 }, 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_1", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [ 00:11:15.944 { 00:11:15.944 "trtype": "TCP" 00:11:15.944 } 00:11:15.944 ] 00:11:15.944 }, 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_2", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [ 00:11:15.944 { 00:11:15.944 "trtype": "TCP" 00:11:15.944 } 00:11:15.944 ] 00:11:15.944 }, 00:11:15.944 { 00:11:15.944 "name": "nvmf_tgt_poll_group_3", 00:11:15.944 "admin_qpairs": 0, 00:11:15.944 "io_qpairs": 0, 00:11:15.944 "current_admin_qpairs": 0, 00:11:15.944 "current_io_qpairs": 0, 00:11:15.944 "pending_bdev_io": 0, 00:11:15.944 "completed_nvme_io": 0, 00:11:15.944 "transports": [ 00:11:15.944 { 00:11:15.944 "trtype": "TCP" 00:11:15.944 } 00:11:15.944 ] 00:11:15.944 } 00:11:15.944 ] 00:11:15.944 }' 00:11:15.944 15:34:55 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:15.944 15:34:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:15.944 15:34:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:15.944 15:34:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:15.944 15:34:55 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:15.944 15:34:55 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:15.944 15:34:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:15.944 15:34:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:15.944 15:34:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:15.944 15:34:55 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:15.944 15:34:55 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:15.944 15:34:55 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:15.944 15:34:55 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:15.944 15:34:55 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:15.944 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 Malloc1 00:11:15.944 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.944 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 
15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.944 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:15.944 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.944 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.944 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.944 [2024-07-10 15:34:55.254062] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.944 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.944 15:34:55 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:15.944 15:34:55 -- common/autotest_common.sh@640 -- # local es=0 00:11:15.944 15:34:55 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:15.944 15:34:55 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:15.944 15:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:15.944 15:34:55 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:15.944 15:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:15.944 15:34:55 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:15.944 15:34:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:15.944 15:34:55 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:15.944 15:34:55 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:15.944 15:34:55 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:11:15.944 [2024-07-10 15:34:55.276608] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:15.945 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:15.945 could not add new controller: failed to write to nvme-fabrics device 00:11:15.945 15:34:55 -- common/autotest_common.sh@643 -- # es=1 00:11:15.945 15:34:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:15.945 15:34:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:15.945 15:34:55 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:11:15.945 15:34:55 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.945 15:34:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:15.945 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:11:15.945 15:34:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:15.945 15:34:55 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:16.878 15:34:55 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:16.878 15:34:55 -- common/autotest_common.sh@1177 -- # local i=0 00:11:16.878 15:34:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:16.878 15:34:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:16.878 15:34:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:18.770 15:34:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:18.770 15:34:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:18.770 15:34:57 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:18.770 15:34:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:18.770 15:34:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:18.770 15:34:57 -- common/autotest_common.sh@1187 -- # return 0 00:11:18.770 15:34:57 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.770 15:34:57 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.770 15:34:57 -- common/autotest_common.sh@1198 -- # local i=0 00:11:18.770 15:34:57 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:18.770 15:34:57 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.770 15:34:57 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:18.770 15:34:57 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.770 15:34:57 -- common/autotest_common.sh@1210 -- # return 0 00:11:18.770 15:34:57 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:18.770 15:34:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:18.770 15:34:57 -- common/autotest_common.sh@10 -- # set +x 00:11:18.770 15:34:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:18.770 15:34:57 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.770 15:34:57 -- common/autotest_common.sh@640 -- # local es=0 00:11:18.770 15:34:57 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.770 15:34:57 -- common/autotest_common.sh@628 -- # local arg=nvme 00:11:18.770 15:34:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:18.770 15:34:58 -- common/autotest_common.sh@632 -- # type -t nvme 00:11:18.770 15:34:57 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:18.770 15:34:58 -- common/autotest_common.sh@634 -- # type -P nvme 00:11:18.770 15:34:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:18.770 15:34:58 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:11:18.770 15:34:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:11:18.770 15:34:58 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:18.770 [2024-07-10 15:34:58.016907] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:11:18.770 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:18.770 could not add new controller: failed to write to nvme-fabrics device 00:11:18.770 15:34:58 -- common/autotest_common.sh@643 -- # es=1 00:11:18.770 15:34:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:18.770 15:34:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:18.770 15:34:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:18.770 15:34:58 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:18.770 15:34:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:18.770 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:11:18.770 15:34:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:18.770 15:34:58 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:19.333 15:34:58 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:19.333 15:34:58 -- common/autotest_common.sh@1177 -- # local i=0 00:11:19.333 15:34:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:19.333 15:34:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:19.333 15:34:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:21.859 15:35:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:21.859 15:35:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:21.860 15:35:00 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:21.860 15:35:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:21.860 15:35:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:21.860 15:35:00 -- common/autotest_common.sh@1187 -- # return 0 00:11:21.860 15:35:00 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:21.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.860 15:35:00 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:21.860 15:35:00 -- common/autotest_common.sh@1198 -- # local i=0 00:11:21.860 15:35:00 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:21.860 15:35:00 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.860 15:35:00 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:21.860 15:35:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:21.860 15:35:00 -- common/autotest_common.sh@1210 -- # return 0 00:11:21.860 15:35:00 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:21.860 15:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:21.860 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:21.860 15:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:21.860 15:35:00 -- target/rpc.sh@81 -- # seq 1 5 00:11:21.860 15:35:00 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:21.860 15:35:00 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:21.860 15:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:21.860 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:21.860 15:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:21.860 15:35:00 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.860 15:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:21.860 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:21.860 [2024-07-10 15:35:00.816237] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.860 15:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:21.860 15:35:00 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:21.860 15:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:21.860 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:21.860 15:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:21.860 15:35:00 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:21.860 15:35:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:21.860 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:11:21.860 15:35:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:21.860 15:35:00 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.426 15:35:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.426 15:35:01 -- common/autotest_common.sh@1177 -- # local i=0 00:11:22.426 15:35:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.426 15:35:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:22.426 15:35:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:24.325 15:35:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:24.325 15:35:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:24.325 15:35:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.325 15:35:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:24.325 15:35:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.325 15:35:03 -- common/autotest_common.sh@1187 -- # return 0 00:11:24.325 15:35:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.325 15:35:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.325 15:35:03 -- common/autotest_common.sh@1198 -- # local i=0 00:11:24.325 15:35:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:24.325 15:35:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
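Stripped of the xtrace noise, each of the five loop iterations here runs the same subsystem lifecycle: create the subsystem, expose it on the TCP listener, attach the Malloc1 namespace, open it to any host, connect and disconnect with nvme-cli, then tear everything down. One pass looks roughly like the sketch below; the scripts/rpc.py path is an assumption for what the harness's rpc_cmd wrapper resolves to, and the serial-number waits (waitforserial / waitforserial_disconnect) are reduced to comments:

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed; rpc_cmd wraps this
  nqn=nqn.2016-06.io.spdk:cnode1
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  hostid=5b23e107-7094-e311-b1cb-001e67a97d55

  for i in $(seq 1 5); do
      $rpc_py nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME
      $rpc_py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
      $rpc_py nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5
      $rpc_py nvmf_subsystem_allow_any_host "$nqn"
      nvme connect --hostnqn="$hostnqn" --hostid="$hostid" -t tcp -n "$nqn" -a 10.0.0.2 -s 4420
      # waitforserial: poll lsblk until a device with serial SPDKISFASTANDAWESOME shows up
      nvme disconnect -n "$nqn"
      # waitforserial_disconnect: poll lsblk until that serial is gone again
      $rpc_py nvmf_subsystem_remove_ns "$nqn" 5
      $rpc_py nvmf_delete_subsystem "$nqn"
  done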
00:11:24.325 15:35:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:24.325 15:35:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.325 15:35:03 -- common/autotest_common.sh@1210 -- # return 0 00:11:24.325 15:35:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:24.325 15:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.325 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 15:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.325 15:35:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:24.325 15:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.325 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 15:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.325 15:35:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:24.325 15:35:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:24.325 15:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.325 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 15:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.325 15:35:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:24.325 15:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.325 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 [2024-07-10 15:35:03.676598] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:24.325 15:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.325 15:35:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:24.325 15:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.325 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 15:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.325 15:35:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:24.325 15:35:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:24.325 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:11:24.325 15:35:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:24.325 15:35:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.259 15:35:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.259 15:35:04 -- common/autotest_common.sh@1177 -- # local i=0 00:11:25.259 15:35:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.259 15:35:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:25.259 15:35:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:27.157 15:35:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:27.157 15:35:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:27.157 15:35:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:27.157 15:35:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:27.157 15:35:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.157 15:35:06 -- 
common/autotest_common.sh@1187 -- # return 0 00:11:27.157 15:35:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.157 15:35:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.157 15:35:06 -- common/autotest_common.sh@1198 -- # local i=0 00:11:27.157 15:35:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:27.157 15:35:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.157 15:35:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:27.157 15:35:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.157 15:35:06 -- common/autotest_common.sh@1210 -- # return 0 00:11:27.157 15:35:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:27.157 15:35:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.157 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:27.157 15:35:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.157 15:35:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.157 15:35:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.157 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:27.157 15:35:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.157 15:35:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:27.157 15:35:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:27.157 15:35:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.157 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:27.414 15:35:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.414 15:35:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:27.414 15:35:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.414 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:27.414 [2024-07-10 15:35:06.539574] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:27.414 15:35:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.414 15:35:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:27.414 15:35:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.414 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:27.414 15:35:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.414 15:35:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:27.414 15:35:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:27.414 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:11:27.414 15:35:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:27.414 15:35:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:27.980 15:35:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:27.980 15:35:07 -- common/autotest_common.sh@1177 -- # local i=0 00:11:27.980 15:35:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:27.980 15:35:07 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:11:27.980 15:35:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:29.877 15:35:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:29.877 15:35:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:29.877 15:35:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:29.877 15:35:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:29.877 15:35:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:29.877 15:35:09 -- common/autotest_common.sh@1187 -- # return 0 00:11:29.877 15:35:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.135 15:35:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.135 15:35:09 -- common/autotest_common.sh@1198 -- # local i=0 00:11:30.135 15:35:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:30.135 15:35:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.135 15:35:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:30.135 15:35:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.135 15:35:09 -- common/autotest_common.sh@1210 -- # return 0 00:11:30.135 15:35:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:30.135 15:35:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.136 15:35:09 -- common/autotest_common.sh@10 -- # set +x 00:11:30.136 15:35:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.136 15:35:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.136 15:35:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.136 15:35:09 -- common/autotest_common.sh@10 -- # set +x 00:11:30.136 15:35:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.136 15:35:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:30.136 15:35:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:30.136 15:35:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.136 15:35:09 -- common/autotest_common.sh@10 -- # set +x 00:11:30.136 15:35:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.136 15:35:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:30.136 15:35:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.136 15:35:09 -- common/autotest_common.sh@10 -- # set +x 00:11:30.136 [2024-07-10 15:35:09.347790] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:30.136 15:35:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.136 15:35:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:30.136 15:35:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.136 15:35:09 -- common/autotest_common.sh@10 -- # set +x 00:11:30.136 15:35:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.136 15:35:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:30.136 15:35:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:30.136 15:35:09 -- common/autotest_common.sh@10 -- # set +x 00:11:30.136 15:35:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:30.136 
15:35:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:30.702 15:35:09 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:30.702 15:35:09 -- common/autotest_common.sh@1177 -- # local i=0 00:11:30.702 15:35:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:30.702 15:35:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:30.702 15:35:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:32.601 15:35:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:32.601 15:35:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:32.601 15:35:11 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.601 15:35:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:32.601 15:35:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.601 15:35:11 -- common/autotest_common.sh@1187 -- # return 0 00:11:32.601 15:35:11 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:32.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.860 15:35:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:32.860 15:35:12 -- common/autotest_common.sh@1198 -- # local i=0 00:11:32.860 15:35:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:32.860 15:35:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.860 15:35:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:32.860 15:35:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:32.860 15:35:12 -- common/autotest_common.sh@1210 -- # return 0 00:11:32.860 15:35:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:32.860 15:35:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.860 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:32.860 15:35:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.860 15:35:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:32.860 15:35:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.860 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:32.860 15:35:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.860 15:35:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:32.860 15:35:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:32.860 15:35:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.860 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:32.860 15:35:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.860 15:35:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:32.860 15:35:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.860 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:32.860 [2024-07-10 15:35:12.069698] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:32.860 15:35:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.860 15:35:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:32.860 
15:35:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.860 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:32.860 15:35:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.860 15:35:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:32.860 15:35:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:32.860 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:11:32.860 15:35:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:32.860 15:35:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:33.425 15:35:12 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:33.425 15:35:12 -- common/autotest_common.sh@1177 -- # local i=0 00:11:33.425 15:35:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:11:33.425 15:35:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:11:33.425 15:35:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:11:35.952 15:35:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:11:35.952 15:35:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:11:35.952 15:35:14 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.952 15:35:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:11:35.952 15:35:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.952 15:35:14 -- common/autotest_common.sh@1187 -- # return 0 00:11:35.952 15:35:14 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:35.952 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.952 15:35:14 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:35.952 15:35:14 -- common/autotest_common.sh@1198 -- # local i=0 00:11:35.952 15:35:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:35.952 15:35:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.952 15:35:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:35.952 15:35:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:35.952 15:35:14 -- common/autotest_common.sh@1210 -- # return 0 00:11:35.952 15:35:14 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:35.952 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.952 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.952 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.952 15:35:14 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@99 -- # seq 1 5 00:11:35.953 15:35:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.953 15:35:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 [2024-07-10 15:35:14.884853] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.953 15:35:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 [2024-07-10 15:35:14.932928] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- 
common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.953 15:35:14 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 [2024-07-10 15:35:14.981089] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:14 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:14 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.953 15:35:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 [2024-07-10 15:35:15.029243] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 
15:35:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:35.953 15:35:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 [2024-07-10 15:35:15.077402] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
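The nvmf_get_stats call above dumps the per-poll-group statistics shown next; the test then only asserts that the admin and I/O queue-pair counters sum to something non-zero. Judging from the xtrace, the jsum helper is a jq filter piped through awk; a stand-alone sketch, assuming the JSON has been captured in $stats:

  # Sum one numeric field across all poll groups of the nvmf_get_stats output.
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  admin_qpairs=$(jsum '.poll_groups[].admin_qpairs')   # 2+2+1+2 = 7 in this run
  io_qpairs=$(jsum '.poll_groups[].io_qpairs')         # 4 x 84 = 336 in this run
  (( admin_qpairs > 0 && io_qpairs > 0 ))              # the only check the test makes on these values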
00:11:35.953 15:35:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:35.953 15:35:15 -- common/autotest_common.sh@10 -- # set +x 00:11:35.953 15:35:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:35.953 15:35:15 -- target/rpc.sh@110 -- # stats='{ 00:11:35.953 "tick_rate": 2700000000, 00:11:35.953 "poll_groups": [ 00:11:35.953 { 00:11:35.953 "name": "nvmf_tgt_poll_group_0", 00:11:35.953 "admin_qpairs": 2, 00:11:35.953 "io_qpairs": 84, 00:11:35.953 "current_admin_qpairs": 0, 00:11:35.953 "current_io_qpairs": 0, 00:11:35.953 "pending_bdev_io": 0, 00:11:35.953 "completed_nvme_io": 134, 00:11:35.953 "transports": [ 00:11:35.953 { 00:11:35.953 "trtype": "TCP" 00:11:35.953 } 00:11:35.953 ] 00:11:35.953 }, 00:11:35.953 { 00:11:35.953 "name": "nvmf_tgt_poll_group_1", 00:11:35.953 "admin_qpairs": 2, 00:11:35.953 "io_qpairs": 84, 00:11:35.953 "current_admin_qpairs": 0, 00:11:35.953 "current_io_qpairs": 0, 00:11:35.953 "pending_bdev_io": 0, 00:11:35.953 "completed_nvme_io": 86, 00:11:35.953 "transports": [ 00:11:35.953 { 00:11:35.954 "trtype": "TCP" 00:11:35.954 } 00:11:35.954 ] 00:11:35.954 }, 00:11:35.954 { 00:11:35.954 "name": "nvmf_tgt_poll_group_2", 00:11:35.954 "admin_qpairs": 1, 00:11:35.954 "io_qpairs": 84, 00:11:35.954 "current_admin_qpairs": 0, 00:11:35.954 "current_io_qpairs": 0, 00:11:35.954 "pending_bdev_io": 0, 00:11:35.954 "completed_nvme_io": 183, 00:11:35.954 "transports": [ 00:11:35.954 { 00:11:35.954 "trtype": "TCP" 00:11:35.954 } 00:11:35.954 ] 00:11:35.954 }, 00:11:35.954 { 00:11:35.954 "name": "nvmf_tgt_poll_group_3", 00:11:35.954 "admin_qpairs": 2, 00:11:35.954 "io_qpairs": 84, 00:11:35.954 "current_admin_qpairs": 0, 00:11:35.954 "current_io_qpairs": 0, 00:11:35.954 "pending_bdev_io": 0, 00:11:35.954 "completed_nvme_io": 283, 00:11:35.954 "transports": [ 00:11:35.954 { 00:11:35.954 "trtype": "TCP" 00:11:35.954 } 00:11:35.954 ] 00:11:35.954 } 00:11:35.954 ] 00:11:35.954 }' 00:11:35.954 15:35:15 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:35.954 15:35:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:35.954 15:35:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:35.954 15:35:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.954 15:35:15 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:35.954 15:35:15 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:35.954 15:35:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:35.954 15:35:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:35.954 15:35:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:35.954 15:35:15 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:35.954 15:35:15 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:35.954 15:35:15 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:35.954 15:35:15 -- target/rpc.sh@123 -- # nvmftestfini 00:11:35.954 15:35:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:35.954 15:35:15 -- nvmf/common.sh@116 -- # sync 00:11:35.954 15:35:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:35.954 15:35:15 -- nvmf/common.sh@119 -- # set +e 00:11:35.954 15:35:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:35.954 15:35:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:35.954 rmmod nvme_tcp 00:11:35.954 rmmod nvme_fabrics 00:11:35.954 rmmod nvme_keyring 00:11:35.954 15:35:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:35.954 15:35:15 -- nvmf/common.sh@123 -- # set -e 00:11:35.954 15:35:15 -- 
nvmf/common.sh@124 -- # return 0 00:11:35.954 15:35:15 -- nvmf/common.sh@477 -- # '[' -n 2059862 ']' 00:11:35.954 15:35:15 -- nvmf/common.sh@478 -- # killprocess 2059862 00:11:35.954 15:35:15 -- common/autotest_common.sh@926 -- # '[' -z 2059862 ']' 00:11:35.954 15:35:15 -- common/autotest_common.sh@930 -- # kill -0 2059862 00:11:35.954 15:35:15 -- common/autotest_common.sh@931 -- # uname 00:11:35.954 15:35:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:35.954 15:35:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2059862 00:11:35.954 15:35:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:35.954 15:35:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:35.954 15:35:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2059862' 00:11:35.954 killing process with pid 2059862 00:11:35.954 15:35:15 -- common/autotest_common.sh@945 -- # kill 2059862 00:11:35.954 15:35:15 -- common/autotest_common.sh@950 -- # wait 2059862 00:11:36.520 15:35:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:36.520 15:35:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:36.520 15:35:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:36.520 15:35:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.520 15:35:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:36.520 15:35:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.520 15:35:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.520 15:35:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.423 15:35:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:38.423 00:11:38.423 real 0m25.741s 00:11:38.423 user 1m24.623s 00:11:38.423 sys 0m4.182s 00:11:38.423 15:35:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:38.423 15:35:17 -- common/autotest_common.sh@10 -- # set +x 00:11:38.423 ************************************ 00:11:38.423 END TEST nvmf_rpc 00:11:38.423 ************************************ 00:11:38.423 15:35:17 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:38.423 15:35:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:38.423 15:35:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:38.423 15:35:17 -- common/autotest_common.sh@10 -- # set +x 00:11:38.423 ************************************ 00:11:38.423 START TEST nvmf_invalid 00:11:38.423 ************************************ 00:11:38.423 15:35:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:38.423 * Looking for test storage... 
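With nvmf_rpc finished, the teardown traced just above unloads the host-side NVMe/TCP modules, stops the target process and clears the test network before run_test launches nvmf_invalid. Roughly, with the caveat that _remove_spdk_ns runs with its output suppressed, so its exact body is an assumption here:

  modprobe -v -r nvme-tcp            # the rmmod lines above show nvme_tcp, nvme_fabrics and nvme_keyring going away
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" # killprocess: stop the nvmf_tgt reactor process
  # _remove_spdk_ns presumably deletes the cvl_0_0_ns_spdk namespace set up earlier (not visible in this trace)
  ip -4 addr flush cvl_0_1           # drop the initiator-side test address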
00:11:38.423 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:38.423 15:35:17 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:38.423 15:35:17 -- nvmf/common.sh@7 -- # uname -s 00:11:38.423 15:35:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:38.423 15:35:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:38.423 15:35:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:38.423 15:35:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:38.423 15:35:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:38.423 15:35:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:38.423 15:35:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:38.423 15:35:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:38.423 15:35:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:38.423 15:35:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:38.423 15:35:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.423 15:35:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:38.423 15:35:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:38.423 15:35:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:38.423 15:35:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:38.423 15:35:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:38.423 15:35:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:38.423 15:35:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:38.423 15:35:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:38.423 15:35:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.423 15:35:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.423 15:35:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.423 15:35:17 -- paths/export.sh@5 -- # export PATH 00:11:38.423 15:35:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:38.423 15:35:17 -- nvmf/common.sh@46 -- # : 0 00:11:38.423 15:35:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:38.423 15:35:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:38.423 15:35:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:38.423 15:35:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:38.424 15:35:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:38.424 15:35:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:38.424 15:35:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:38.424 15:35:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:38.424 15:35:17 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:38.424 15:35:17 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:38.424 15:35:17 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:38.424 15:35:17 -- target/invalid.sh@14 -- # target=foobar 00:11:38.424 15:35:17 -- target/invalid.sh@16 -- # RANDOM=0 00:11:38.424 15:35:17 -- target/invalid.sh@34 -- # nvmftestinit 00:11:38.424 15:35:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:38.424 15:35:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:38.424 15:35:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:38.424 15:35:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:38.424 15:35:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:38.424 15:35:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:38.424 15:35:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:38.424 15:35:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:38.424 15:35:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:38.424 15:35:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:38.424 15:35:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:38.424 15:35:17 -- common/autotest_common.sh@10 -- # set +x 00:11:40.954 15:35:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:40.954 15:35:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:40.954 15:35:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:40.954 15:35:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:40.954 15:35:19 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:40.954 15:35:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:40.954 15:35:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:40.954 15:35:19 -- nvmf/common.sh@294 -- # net_devs=() 00:11:40.954 15:35:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:40.954 15:35:19 -- nvmf/common.sh@295 -- # e810=() 00:11:40.954 15:35:19 -- nvmf/common.sh@295 -- # local -ga e810 00:11:40.954 15:35:19 -- nvmf/common.sh@296 -- # x722=() 00:11:40.954 15:35:19 -- nvmf/common.sh@296 -- # local -ga x722 00:11:40.954 15:35:19 -- nvmf/common.sh@297 -- # mlx=() 00:11:40.954 15:35:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:40.954 15:35:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.954 15:35:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:40.954 15:35:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:40.954 15:35:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:40.954 15:35:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:40.954 15:35:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:40.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:40.954 15:35:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:40.954 15:35:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:40.954 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:40.954 15:35:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:40.954 15:35:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:40.954 
15:35:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.954 15:35:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:40.954 15:35:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.954 15:35:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:40.954 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:40.954 15:35:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.954 15:35:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:40.954 15:35:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.954 15:35:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:40.954 15:35:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.954 15:35:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:40.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:40.954 15:35:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.954 15:35:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:40.954 15:35:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:40.954 15:35:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:40.954 15:35:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:40.954 15:35:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.954 15:35:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.954 15:35:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:40.954 15:35:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:40.954 15:35:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:40.954 15:35:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:40.954 15:35:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:40.954 15:35:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:40.954 15:35:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.954 15:35:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:40.954 15:35:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:40.955 15:35:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:40.955 15:35:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:40.955 15:35:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:40.955 15:35:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:40.955 15:35:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:40.955 15:35:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:40.955 15:35:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:40.955 15:35:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:40.955 15:35:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:40.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:11:40.955 00:11:40.955 --- 10.0.0.2 ping statistics --- 00:11:40.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.955 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:11:40.955 15:35:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:40.955 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.955 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:40.955 00:11:40.955 --- 10.0.0.1 ping statistics --- 00:11:40.955 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.955 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:40.955 15:35:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.955 15:35:19 -- nvmf/common.sh@410 -- # return 0 00:11:40.955 15:35:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:40.955 15:35:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.955 15:35:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:40.955 15:35:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:40.955 15:35:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.955 15:35:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:40.955 15:35:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:40.955 15:35:19 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:40.955 15:35:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:40.955 15:35:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:40.955 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 15:35:19 -- nvmf/common.sh@469 -- # nvmfpid=2064449 00:11:40.955 15:35:19 -- nvmf/common.sh@470 -- # waitforlisten 2064449 00:11:40.955 15:35:19 -- common/autotest_common.sh@819 -- # '[' -z 2064449 ']' 00:11:40.955 15:35:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.955 15:35:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:40.955 15:35:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.955 15:35:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.955 15:35:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:40.955 15:35:19 -- common/autotest_common.sh@10 -- # set +x 00:11:40.955 [2024-07-10 15:35:19.918942] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:40.955 [2024-07-10 15:35:19.919040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.955 EAL: No free 2048 kB hugepages reported on node 1 00:11:40.955 [2024-07-10 15:35:19.988026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.955 [2024-07-10 15:35:20.111779] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:40.955 [2024-07-10 15:35:20.111940] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.955 [2024-07-10 15:35:20.111960] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:40.955 [2024-07-10 15:35:20.111975] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
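By this point the invalid-parameter cases have everything they need: the two E810 ports were detected, the target-side port was moved into a private network namespace, reachability was verified in both directions, and the nvmf_tgt application is being started inside that namespace (its startup banner continues below). The wiring, condensed from the commands traced above (the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses are specific to this host):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator port stays in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target -> initiator
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # target under test (path relative to the spdk checkout)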
00:11:40.955 [2024-07-10 15:35:20.112033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.955 [2024-07-10 15:35:20.112061] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.955 [2024-07-10 15:35:20.112118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.955 [2024-07-10 15:35:20.112121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.520 15:35:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:41.520 15:35:20 -- common/autotest_common.sh@852 -- # return 0 00:11:41.520 15:35:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:41.520 15:35:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:41.520 15:35:20 -- common/autotest_common.sh@10 -- # set +x 00:11:41.520 15:35:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.520 15:35:20 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:41.520 15:35:20 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28834 00:11:41.777 [2024-07-10 15:35:21.132597] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:41.777 15:35:21 -- target/invalid.sh@40 -- # out='request: 00:11:41.777 { 00:11:41.777 "nqn": "nqn.2016-06.io.spdk:cnode28834", 00:11:41.777 "tgt_name": "foobar", 00:11:41.777 "method": "nvmf_create_subsystem", 00:11:41.777 "req_id": 1 00:11:41.777 } 00:11:41.777 Got JSON-RPC error response 00:11:41.778 response: 00:11:41.778 { 00:11:41.778 "code": -32603, 00:11:41.778 "message": "Unable to find target foobar" 00:11:41.778 }' 00:11:41.778 15:35:21 -- target/invalid.sh@41 -- # [[ request: 00:11:41.778 { 00:11:41.778 "nqn": "nqn.2016-06.io.spdk:cnode28834", 00:11:41.778 "tgt_name": "foobar", 00:11:41.778 "method": "nvmf_create_subsystem", 00:11:41.778 "req_id": 1 00:11:41.778 } 00:11:41.778 Got JSON-RPC error response 00:11:41.778 response: 00:11:41.778 { 00:11:41.778 "code": -32603, 00:11:41.778 "message": "Unable to find target foobar" 00:11:41.778 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:42.035 15:35:21 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:42.035 15:35:21 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode12888 00:11:42.293 [2024-07-10 15:35:21.421597] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12888: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:42.293 15:35:21 -- target/invalid.sh@45 -- # out='request: 00:11:42.293 { 00:11:42.293 "nqn": "nqn.2016-06.io.spdk:cnode12888", 00:11:42.293 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:42.293 "method": "nvmf_create_subsystem", 00:11:42.293 "req_id": 1 00:11:42.293 } 00:11:42.293 Got JSON-RPC error response 00:11:42.293 response: 00:11:42.293 { 00:11:42.293 "code": -32602, 00:11:42.293 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:42.293 }' 00:11:42.293 15:35:21 -- target/invalid.sh@46 -- # [[ request: 00:11:42.293 { 00:11:42.293 "nqn": "nqn.2016-06.io.spdk:cnode12888", 00:11:42.293 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:42.293 "method": "nvmf_create_subsystem", 00:11:42.293 "req_id": 1 00:11:42.293 } 00:11:42.293 Got JSON-RPC error response 00:11:42.293 response: 00:11:42.293 { 
00:11:42.293 "code": -32602, 00:11:42.293 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:42.293 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:42.293 15:35:21 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:42.293 15:35:21 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16592 00:11:42.293 [2024-07-10 15:35:21.666319] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16592: invalid model number 'SPDK_Controller' 00:11:42.552 15:35:21 -- target/invalid.sh@50 -- # out='request: 00:11:42.552 { 00:11:42.552 "nqn": "nqn.2016-06.io.spdk:cnode16592", 00:11:42.552 "model_number": "SPDK_Controller\u001f", 00:11:42.552 "method": "nvmf_create_subsystem", 00:11:42.552 "req_id": 1 00:11:42.552 } 00:11:42.552 Got JSON-RPC error response 00:11:42.552 response: 00:11:42.552 { 00:11:42.552 "code": -32602, 00:11:42.552 "message": "Invalid MN SPDK_Controller\u001f" 00:11:42.552 }' 00:11:42.552 15:35:21 -- target/invalid.sh@51 -- # [[ request: 00:11:42.552 { 00:11:42.552 "nqn": "nqn.2016-06.io.spdk:cnode16592", 00:11:42.552 "model_number": "SPDK_Controller\u001f", 00:11:42.552 "method": "nvmf_create_subsystem", 00:11:42.552 "req_id": 1 00:11:42.552 } 00:11:42.552 Got JSON-RPC error response 00:11:42.552 response: 00:11:42.552 { 00:11:42.552 "code": -32602, 00:11:42.552 "message": "Invalid MN SPDK_Controller\u001f" 00:11:42.552 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:42.552 15:35:21 -- target/invalid.sh@54 -- # gen_random_s 21 00:11:42.552 15:35:21 -- target/invalid.sh@19 -- # local length=21 ll 00:11:42.552 15:35:21 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:42.552 15:35:21 -- target/invalid.sh@21 -- # local chars 00:11:42.552 15:35:21 -- target/invalid.sh@22 -- # local string 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 56 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=8 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 87 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=W 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 127 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=$'\177' 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 53 00:11:42.552 15:35:21 -- 
target/invalid.sh@25 -- # echo -e '\x35' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=5 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 117 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=u 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 58 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=: 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 122 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=z 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 40 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+='(' 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 67 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=C 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 56 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=8 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 97 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=a 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 123 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+='{' 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 36 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+='$' 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 92 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+='\' 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 91 00:11:42.552 15:35:21 -- 
target/invalid.sh@25 -- # echo -e '\x5b' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+='[' 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 71 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=G 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 47 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=/ 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 109 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=m 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 109 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=m 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 124 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+='|' 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # printf %x 53 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:42.552 15:35:21 -- target/invalid.sh@25 -- # string+=5 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.552 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.552 15:35:21 -- target/invalid.sh@28 -- # [[ 8 == \- ]] 00:11:42.552 15:35:21 -- target/invalid.sh@31 -- # echo '8W5u:z(C8a{$\[G/mm|5' 00:11:42.552 15:35:21 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '8W5u:z(C8a{$\[G/mm|5' nqn.2016-06.io.spdk:cnode12314 00:11:42.810 [2024-07-10 15:35:21.971356] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12314: invalid serial number '8W5u:z(C8a{$\[G/mm|5' 00:11:42.810 15:35:21 -- target/invalid.sh@54 -- # out='request: 00:11:42.810 { 00:11:42.810 "nqn": "nqn.2016-06.io.spdk:cnode12314", 00:11:42.810 "serial_number": "8W\u007f5u:z(C8a{$\\[G/mm|5", 00:11:42.810 "method": "nvmf_create_subsystem", 00:11:42.810 "req_id": 1 00:11:42.810 } 00:11:42.810 Got JSON-RPC error response 00:11:42.810 response: 00:11:42.810 { 00:11:42.810 "code": -32602, 00:11:42.810 "message": "Invalid SN 8W\u007f5u:z(C8a{$\\[G/mm|5" 00:11:42.810 }' 00:11:42.810 15:35:21 -- target/invalid.sh@55 -- # [[ request: 00:11:42.810 { 00:11:42.810 "nqn": "nqn.2016-06.io.spdk:cnode12314", 00:11:42.810 "serial_number": "8W\u007f5u:z(C8a{$\\[G/mm|5", 00:11:42.810 "method": "nvmf_create_subsystem", 00:11:42.810 "req_id": 1 00:11:42.810 } 00:11:42.810 Got JSON-RPC error response 00:11:42.810 response: 00:11:42.810 { 00:11:42.810 "code": -32602, 
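The trace above is target/invalid.sh exercising serial-number validation: gen_random_s assembles a 21-character string one byte at a time (printf %x picks a code point, echo -e turns it into the character), and nvmf_create_subsystem is then expected to reject it with an "Invalid SN" JSON-RPC error. The sketch below condenses that flow so it could be replayed by hand against an already-running SPDK target; the rpc.py path, the cnode number, and the trailing control byte are illustrative stand-ins rather than values taken from this run.

    # Minimal re-creation of the check traced above (assumes a running nvmf target
    # reachable through scripts/rpc.py on its default socket).
    gen_random_s() {
        local length=$1 ll string=
        for (( ll = 0; ll < length; ll++ )); do
            # pick a printable code point (32-126) and append that character
            string+=$(echo -e "\x$(printf %x $(( RANDOM % 95 + 32 )))")
        done
        printf '%s\n' "$string"
    }

    rpc=./scripts/rpc.py                           # path is an assumption for a local checkout
    serial="$(gen_random_s 21)"$'\x1f'             # control byte, like the \u001f case earlier in this log
    out=$("$rpc" nvmf_create_subsystem -s "$serial" nqn.2016-06.io.spdk:cnode9999 2>&1) || true
    [[ $out == *"Invalid SN"* ]] && echo 'serial rejected as expected'

The model-number loop that follows in the trace (gen_random_s 41 fed to the -d option) works the same way, with the expected error becoming "Invalid MN".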
00:11:42.810 "message": "Invalid SN 8W\u007f5u:z(C8a{$\\[G/mm|5" 00:11:42.810 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:42.810 15:35:21 -- target/invalid.sh@58 -- # gen_random_s 41 00:11:42.811 15:35:21 -- target/invalid.sh@19 -- # local length=41 ll 00:11:42.811 15:35:21 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:42.811 15:35:21 -- target/invalid.sh@21 -- # local chars 00:11:42.811 15:35:21 -- target/invalid.sh@22 -- # local string 00:11:42.811 15:35:21 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:42.811 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:21 -- target/invalid.sh@25 -- # printf %x 57 00:11:42.811 15:35:21 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:42.811 15:35:21 -- target/invalid.sh@25 -- # string+=9 00:11:42.811 15:35:21 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:21 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 63 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+='?' 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 119 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=w 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 53 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=5 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 117 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=u 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 70 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=F 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 102 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=f 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 67 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=C 00:11:42.811 15:35:22 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 111 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=o 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 68 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=D 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 61 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+== 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 85 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x55' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=U 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 120 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=x 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 48 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x30' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=0 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 36 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+='$' 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 98 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=b 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 84 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=T 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 94 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+='^' 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 53 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=5 00:11:42.811 15:35:22 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 115 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=s 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 123 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+='{' 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 121 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=y 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 57 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=9 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 95 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=_ 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 52 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x34' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=4 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 86 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=V 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 112 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=p 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 57 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=9 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # printf %x 103 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x67' 00:11:42.811 15:35:22 -- target/invalid.sh@25 -- # string+=g 00:11:42.811 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 80 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=P 00:11:42.812 15:35:22 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 86 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x56' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=V 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 50 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=2 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 65 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=A 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 100 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=d 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 105 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=i 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 120 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=x 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 93 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=']' 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 94 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+='^' 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 73 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x49' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=I 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 64 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x40' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+=@ 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # printf %x 124 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:11:42.812 15:35:22 -- target/invalid.sh@25 -- # string+='|' 00:11:42.812 15:35:22 -- 
target/invalid.sh@24 -- # (( ll++ )) 00:11:42.812 15:35:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:11:42.812 15:35:22 -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:11:42.812 15:35:22 -- target/invalid.sh@31 -- # echo '9?w5uFfCoD=Ux0$bT^5s{y9_4Vp9gPV2Adix]^I@|' 00:11:42.812 15:35:22 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '9?w5uFfCoD=Ux0$bT^5s{y9_4Vp9gPV2Adix]^I@|' nqn.2016-06.io.spdk:cnode31712 00:11:43.070 [2024-07-10 15:35:22.360628] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31712: invalid model number '9?w5uFfCoD=Ux0$bT^5s{y9_4Vp9gPV2Adix]^I@|' 00:11:43.070 15:35:22 -- target/invalid.sh@58 -- # out='request: 00:11:43.070 { 00:11:43.070 "nqn": "nqn.2016-06.io.spdk:cnode31712", 00:11:43.070 "model_number": "9?w5uFfCoD=Ux0$bT^5s{y9_4Vp9gPV2Adix]^I@|", 00:11:43.070 "method": "nvmf_create_subsystem", 00:11:43.070 "req_id": 1 00:11:43.070 } 00:11:43.070 Got JSON-RPC error response 00:11:43.070 response: 00:11:43.070 { 00:11:43.070 "code": -32602, 00:11:43.070 "message": "Invalid MN 9?w5uFfCoD=Ux0$bT^5s{y9_4Vp9gPV2Adix]^I@|" 00:11:43.070 }' 00:11:43.070 15:35:22 -- target/invalid.sh@59 -- # [[ request: 00:11:43.070 { 00:11:43.070 "nqn": "nqn.2016-06.io.spdk:cnode31712", 00:11:43.070 "model_number": "9?w5uFfCoD=Ux0$bT^5s{y9_4Vp9gPV2Adix]^I@|", 00:11:43.070 "method": "nvmf_create_subsystem", 00:11:43.070 "req_id": 1 00:11:43.070 } 00:11:43.070 Got JSON-RPC error response 00:11:43.070 response: 00:11:43.070 { 00:11:43.070 "code": -32602, 00:11:43.070 "message": "Invalid MN 9?w5uFfCoD=Ux0$bT^5s{y9_4Vp9gPV2Adix]^I@|" 00:11:43.070 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:43.070 15:35:22 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:43.378 [2024-07-10 15:35:22.593464] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:43.378 15:35:22 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:43.668 15:35:22 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:43.668 15:35:22 -- target/invalid.sh@67 -- # echo '' 00:11:43.668 15:35:22 -- target/invalid.sh@67 -- # head -n 1 00:11:43.668 15:35:22 -- target/invalid.sh@67 -- # IP= 00:11:43.668 15:35:22 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:11:43.924 [2024-07-10 15:35:23.075093] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:43.924 15:35:23 -- target/invalid.sh@69 -- # out='request: 00:11:43.924 { 00:11:43.924 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:43.924 "listen_address": { 00:11:43.924 "trtype": "tcp", 00:11:43.924 "traddr": "", 00:11:43.924 "trsvcid": "4421" 00:11:43.924 }, 00:11:43.924 "method": "nvmf_subsystem_remove_listener", 00:11:43.924 "req_id": 1 00:11:43.924 } 00:11:43.924 Got JSON-RPC error response 00:11:43.924 response: 00:11:43.924 { 00:11:43.924 "code": -32602, 00:11:43.924 "message": "Invalid parameters" 00:11:43.924 }' 00:11:43.924 15:35:23 -- target/invalid.sh@70 -- # [[ request: 00:11:43.924 { 00:11:43.924 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:43.924 "listen_address": { 00:11:43.924 "trtype": "tcp", 00:11:43.924 "traddr": "", 00:11:43.924 "trsvcid": "4421" 00:11:43.924 }, 00:11:43.924 "method": 
"nvmf_subsystem_remove_listener", 00:11:43.924 "req_id": 1 00:11:43.924 } 00:11:43.924 Got JSON-RPC error response 00:11:43.924 response: 00:11:43.924 { 00:11:43.924 "code": -32602, 00:11:43.924 "message": "Invalid parameters" 00:11:43.924 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:43.924 15:35:23 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29425 -i 0 00:11:44.181 [2024-07-10 15:35:23.319912] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29425: invalid cntlid range [0-65519] 00:11:44.181 15:35:23 -- target/invalid.sh@73 -- # out='request: 00:11:44.181 { 00:11:44.181 "nqn": "nqn.2016-06.io.spdk:cnode29425", 00:11:44.181 "min_cntlid": 0, 00:11:44.181 "method": "nvmf_create_subsystem", 00:11:44.181 "req_id": 1 00:11:44.181 } 00:11:44.181 Got JSON-RPC error response 00:11:44.181 response: 00:11:44.181 { 00:11:44.181 "code": -32602, 00:11:44.181 "message": "Invalid cntlid range [0-65519]" 00:11:44.181 }' 00:11:44.181 15:35:23 -- target/invalid.sh@74 -- # [[ request: 00:11:44.181 { 00:11:44.181 "nqn": "nqn.2016-06.io.spdk:cnode29425", 00:11:44.181 "min_cntlid": 0, 00:11:44.181 "method": "nvmf_create_subsystem", 00:11:44.181 "req_id": 1 00:11:44.181 } 00:11:44.181 Got JSON-RPC error response 00:11:44.181 response: 00:11:44.181 { 00:11:44.181 "code": -32602, 00:11:44.181 "message": "Invalid cntlid range [0-65519]" 00:11:44.181 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.181 15:35:23 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13988 -i 65520 00:11:44.181 [2024-07-10 15:35:23.552730] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13988: invalid cntlid range [65520-65519] 00:11:44.439 15:35:23 -- target/invalid.sh@75 -- # out='request: 00:11:44.439 { 00:11:44.439 "nqn": "nqn.2016-06.io.spdk:cnode13988", 00:11:44.439 "min_cntlid": 65520, 00:11:44.439 "method": "nvmf_create_subsystem", 00:11:44.439 "req_id": 1 00:11:44.439 } 00:11:44.439 Got JSON-RPC error response 00:11:44.439 response: 00:11:44.439 { 00:11:44.439 "code": -32602, 00:11:44.439 "message": "Invalid cntlid range [65520-65519]" 00:11:44.439 }' 00:11:44.439 15:35:23 -- target/invalid.sh@76 -- # [[ request: 00:11:44.439 { 00:11:44.439 "nqn": "nqn.2016-06.io.spdk:cnode13988", 00:11:44.439 "min_cntlid": 65520, 00:11:44.439 "method": "nvmf_create_subsystem", 00:11:44.439 "req_id": 1 00:11:44.439 } 00:11:44.439 Got JSON-RPC error response 00:11:44.439 response: 00:11:44.439 { 00:11:44.439 "code": -32602, 00:11:44.439 "message": "Invalid cntlid range [65520-65519]" 00:11:44.439 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.439 15:35:23 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24565 -I 0 00:11:44.439 [2024-07-10 15:35:23.789554] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24565: invalid cntlid range [1-0] 00:11:44.439 15:35:23 -- target/invalid.sh@77 -- # out='request: 00:11:44.439 { 00:11:44.439 "nqn": "nqn.2016-06.io.spdk:cnode24565", 00:11:44.439 "max_cntlid": 0, 00:11:44.439 "method": "nvmf_create_subsystem", 00:11:44.439 "req_id": 1 00:11:44.439 } 00:11:44.439 Got JSON-RPC error response 00:11:44.439 response: 00:11:44.439 { 00:11:44.439 "code": -32602, 00:11:44.439 "message": 
"Invalid cntlid range [1-0]" 00:11:44.439 }' 00:11:44.439 15:35:23 -- target/invalid.sh@78 -- # [[ request: 00:11:44.439 { 00:11:44.439 "nqn": "nqn.2016-06.io.spdk:cnode24565", 00:11:44.439 "max_cntlid": 0, 00:11:44.439 "method": "nvmf_create_subsystem", 00:11:44.439 "req_id": 1 00:11:44.439 } 00:11:44.439 Got JSON-RPC error response 00:11:44.439 response: 00:11:44.439 { 00:11:44.439 "code": -32602, 00:11:44.439 "message": "Invalid cntlid range [1-0]" 00:11:44.439 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.439 15:35:23 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17235 -I 65520 00:11:44.697 [2024-07-10 15:35:24.030363] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17235: invalid cntlid range [1-65520] 00:11:44.697 15:35:24 -- target/invalid.sh@79 -- # out='request: 00:11:44.697 { 00:11:44.697 "nqn": "nqn.2016-06.io.spdk:cnode17235", 00:11:44.697 "max_cntlid": 65520, 00:11:44.697 "method": "nvmf_create_subsystem", 00:11:44.697 "req_id": 1 00:11:44.697 } 00:11:44.697 Got JSON-RPC error response 00:11:44.697 response: 00:11:44.697 { 00:11:44.697 "code": -32602, 00:11:44.697 "message": "Invalid cntlid range [1-65520]" 00:11:44.697 }' 00:11:44.697 15:35:24 -- target/invalid.sh@80 -- # [[ request: 00:11:44.697 { 00:11:44.697 "nqn": "nqn.2016-06.io.spdk:cnode17235", 00:11:44.697 "max_cntlid": 65520, 00:11:44.697 "method": "nvmf_create_subsystem", 00:11:44.697 "req_id": 1 00:11:44.697 } 00:11:44.697 Got JSON-RPC error response 00:11:44.697 response: 00:11:44.697 { 00:11:44.697 "code": -32602, 00:11:44.697 "message": "Invalid cntlid range [1-65520]" 00:11:44.697 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.697 15:35:24 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11235 -i 6 -I 5 00:11:44.956 [2024-07-10 15:35:24.279249] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11235: invalid cntlid range [6-5] 00:11:44.956 15:35:24 -- target/invalid.sh@83 -- # out='request: 00:11:44.956 { 00:11:44.956 "nqn": "nqn.2016-06.io.spdk:cnode11235", 00:11:44.956 "min_cntlid": 6, 00:11:44.956 "max_cntlid": 5, 00:11:44.956 "method": "nvmf_create_subsystem", 00:11:44.956 "req_id": 1 00:11:44.956 } 00:11:44.956 Got JSON-RPC error response 00:11:44.956 response: 00:11:44.956 { 00:11:44.956 "code": -32602, 00:11:44.956 "message": "Invalid cntlid range [6-5]" 00:11:44.956 }' 00:11:44.956 15:35:24 -- target/invalid.sh@84 -- # [[ request: 00:11:44.956 { 00:11:44.956 "nqn": "nqn.2016-06.io.spdk:cnode11235", 00:11:44.956 "min_cntlid": 6, 00:11:44.956 "max_cntlid": 5, 00:11:44.956 "method": "nvmf_create_subsystem", 00:11:44.956 "req_id": 1 00:11:44.956 } 00:11:44.956 Got JSON-RPC error response 00:11:44.956 response: 00:11:44.956 { 00:11:44.956 "code": -32602, 00:11:44.956 "message": "Invalid cntlid range [6-5]" 00:11:44.956 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:44.956 15:35:24 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:45.216 15:35:24 -- target/invalid.sh@87 -- # out='request: 00:11:45.216 { 00:11:45.216 "name": "foobar", 00:11:45.216 "method": "nvmf_delete_target", 00:11:45.216 "req_id": 1 00:11:45.216 } 00:11:45.216 Got JSON-RPC error response 00:11:45.216 response: 00:11:45.216 { 00:11:45.216 
"code": -32602, 00:11:45.216 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:45.216 }' 00:11:45.216 15:35:24 -- target/invalid.sh@88 -- # [[ request: 00:11:45.216 { 00:11:45.216 "name": "foobar", 00:11:45.216 "method": "nvmf_delete_target", 00:11:45.216 "req_id": 1 00:11:45.216 } 00:11:45.216 Got JSON-RPC error response 00:11:45.216 response: 00:11:45.216 { 00:11:45.216 "code": -32602, 00:11:45.216 "message": "The specified target doesn't exist, cannot delete it." 00:11:45.216 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:45.216 15:35:24 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:45.216 15:35:24 -- target/invalid.sh@91 -- # nvmftestfini 00:11:45.216 15:35:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:45.216 15:35:24 -- nvmf/common.sh@116 -- # sync 00:11:45.216 15:35:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:45.216 15:35:24 -- nvmf/common.sh@119 -- # set +e 00:11:45.216 15:35:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:45.216 15:35:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:45.216 rmmod nvme_tcp 00:11:45.216 rmmod nvme_fabrics 00:11:45.216 rmmod nvme_keyring 00:11:45.216 15:35:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:45.216 15:35:24 -- nvmf/common.sh@123 -- # set -e 00:11:45.216 15:35:24 -- nvmf/common.sh@124 -- # return 0 00:11:45.216 15:35:24 -- nvmf/common.sh@477 -- # '[' -n 2064449 ']' 00:11:45.216 15:35:24 -- nvmf/common.sh@478 -- # killprocess 2064449 00:11:45.216 15:35:24 -- common/autotest_common.sh@926 -- # '[' -z 2064449 ']' 00:11:45.216 15:35:24 -- common/autotest_common.sh@930 -- # kill -0 2064449 00:11:45.216 15:35:24 -- common/autotest_common.sh@931 -- # uname 00:11:45.216 15:35:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:45.216 15:35:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2064449 00:11:45.216 15:35:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:45.216 15:35:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:45.216 15:35:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2064449' 00:11:45.216 killing process with pid 2064449 00:11:45.216 15:35:24 -- common/autotest_common.sh@945 -- # kill 2064449 00:11:45.216 15:35:24 -- common/autotest_common.sh@950 -- # wait 2064449 00:11:45.476 15:35:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:45.476 15:35:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:45.476 15:35:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:45.476 15:35:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:45.476 15:35:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:45.476 15:35:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.476 15:35:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:45.476 15:35:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.019 15:35:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:48.019 00:11:48.019 real 0m9.143s 00:11:48.019 user 0m22.127s 00:11:48.019 sys 0m2.467s 00:11:48.019 15:35:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:48.019 15:35:26 -- common/autotest_common.sh@10 -- # set +x 00:11:48.019 ************************************ 00:11:48.019 END TEST nvmf_invalid 00:11:48.019 ************************************ 00:11:48.019 15:35:26 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:48.019 15:35:26 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:48.019 15:35:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:48.019 15:35:26 -- common/autotest_common.sh@10 -- # set +x 00:11:48.019 ************************************ 00:11:48.019 START TEST nvmf_abort 00:11:48.019 ************************************ 00:11:48.019 15:35:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:11:48.019 * Looking for test storage... 00:11:48.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:48.019 15:35:26 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:48.019 15:35:26 -- nvmf/common.sh@7 -- # uname -s 00:11:48.019 15:35:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:48.019 15:35:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:48.019 15:35:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:48.019 15:35:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:48.019 15:35:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:48.019 15:35:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:48.019 15:35:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:48.019 15:35:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:48.020 15:35:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:48.020 15:35:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:48.020 15:35:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.020 15:35:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:48.020 15:35:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:48.020 15:35:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:48.020 15:35:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:48.020 15:35:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:48.020 15:35:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:48.020 15:35:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:48.020 15:35:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:48.020 15:35:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.020 15:35:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.020 15:35:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.020 15:35:26 -- paths/export.sh@5 -- # export PATH 00:11:48.020 15:35:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:48.020 15:35:26 -- nvmf/common.sh@46 -- # : 0 00:11:48.020 15:35:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:48.020 15:35:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:48.020 15:35:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:48.020 15:35:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:48.020 15:35:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:48.020 15:35:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:48.020 15:35:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:48.020 15:35:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:48.020 15:35:26 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:48.020 15:35:26 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:11:48.020 15:35:26 -- target/abort.sh@14 -- # nvmftestinit 00:11:48.020 15:35:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:48.020 15:35:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:48.020 15:35:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:48.020 15:35:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:48.020 15:35:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:48.020 15:35:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:48.020 15:35:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:48.020 15:35:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:48.020 15:35:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:48.020 15:35:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:48.020 15:35:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:48.020 15:35:26 -- common/autotest_common.sh@10 -- # set +x 00:11:49.927 15:35:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 
00:11:49.927 15:35:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:49.927 15:35:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:49.927 15:35:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:49.927 15:35:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:49.927 15:35:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:49.927 15:35:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:49.927 15:35:28 -- nvmf/common.sh@294 -- # net_devs=() 00:11:49.927 15:35:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:49.927 15:35:28 -- nvmf/common.sh@295 -- # e810=() 00:11:49.927 15:35:28 -- nvmf/common.sh@295 -- # local -ga e810 00:11:49.927 15:35:28 -- nvmf/common.sh@296 -- # x722=() 00:11:49.927 15:35:28 -- nvmf/common.sh@296 -- # local -ga x722 00:11:49.927 15:35:28 -- nvmf/common.sh@297 -- # mlx=() 00:11:49.927 15:35:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:49.927 15:35:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:49.927 15:35:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:49.927 15:35:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:49.927 15:35:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:49.927 15:35:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:49.927 15:35:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:49.927 15:35:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:49.927 15:35:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:49.927 15:35:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:49.927 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:49.927 15:35:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:49.927 15:35:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:49.927 15:35:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.927 15:35:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:49.928 15:35:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:49.928 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:49.928 15:35:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 
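What nvmf/common.sh is doing here is classifying the host's NICs by PCI ID: 0x8086/0x159b and 0x1592 are treated as Intel E810 (driver ice), 0x37d2 as X722, and the 0x15b3 entries as Mellanox, and only matching ports are kept for the run; the loop that follows resolves each matched PCI address to its kernel net device through /sys. A condensed stand-in that queries lspci directly instead of common.sh's cached PCI bus scan might look like the lines below; the 8086:159b filter is the ID matched in this log, and the helper is hypothetical, not part of the script.

    # Enumerate E810 ports and the net device behind each one.
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
        netdev=$(ls "/sys/bus/pci/devices/$pci/net" 2>/dev/null)
        echo "Found net devices under $pci: ${netdev:-none}"
    done

On this host that should report the two 0000:0a:00.x ports and their cvl_0_0/cvl_0_1 names, which is what the "Found net devices under ..." lines just below show.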
00:11:49.928 15:35:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:49.928 15:35:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.928 15:35:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:49.928 15:35:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.928 15:35:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:49.928 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:49.928 15:35:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.928 15:35:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:49.928 15:35:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:49.928 15:35:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:49.928 15:35:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:49.928 15:35:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:49.928 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:49.928 15:35:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:49.928 15:35:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:49.928 15:35:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:49.928 15:35:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:49.928 15:35:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:49.928 15:35:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:49.928 15:35:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:49.928 15:35:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:49.928 15:35:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:49.928 15:35:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:49.928 15:35:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:49.928 15:35:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:49.928 15:35:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:49.928 15:35:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:49.928 15:35:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:49.928 15:35:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:49.928 15:35:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:49.928 15:35:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:49.928 15:35:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:49.928 15:35:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:49.928 15:35:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:49.928 15:35:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:49.928 15:35:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:49.928 15:35:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:49.928 15:35:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:11:49.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:49.928 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:11:49.928 00:11:49.928 --- 10.0.0.2 ping statistics --- 00:11:49.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.928 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:11:49.928 15:35:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:49.928 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:49.928 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:11:49.928 00:11:49.928 --- 10.0.0.1 ping statistics --- 00:11:49.928 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:49.928 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:11:49.928 15:35:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:49.928 15:35:29 -- nvmf/common.sh@410 -- # return 0 00:11:49.928 15:35:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:49.928 15:35:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:49.928 15:35:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:49.928 15:35:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:49.928 15:35:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:49.928 15:35:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:49.928 15:35:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:49.928 15:35:29 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:11:49.928 15:35:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:49.928 15:35:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:49.928 15:35:29 -- common/autotest_common.sh@10 -- # set +x 00:11:49.928 15:35:29 -- nvmf/common.sh@469 -- # nvmfpid=2067245 00:11:49.928 15:35:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:49.928 15:35:29 -- nvmf/common.sh@470 -- # waitforlisten 2067245 00:11:49.928 15:35:29 -- common/autotest_common.sh@819 -- # '[' -z 2067245 ']' 00:11:49.928 15:35:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:49.928 15:35:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:49.928 15:35:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:49.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:49.928 15:35:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:49.928 15:35:29 -- common/autotest_common.sh@10 -- # set +x 00:11:49.928 [2024-07-10 15:35:29.133709] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:11:49.928 [2024-07-10 15:35:29.133796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:49.928 EAL: No free 2048 kB hugepages reported on node 1 00:11:49.928 [2024-07-10 15:35:29.209572] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:50.187 [2024-07-10 15:35:29.335023] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:50.187 [2024-07-10 15:35:29.335184] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:50.187 [2024-07-10 15:35:29.335205] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
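Before the target app is started, nvmf_tcp_init (traced just above) isolates the two E810 ports from each other: cvl_0_0 is moved into a fresh network namespace and given 10.0.0.2/24, cvl_0_1 stays in the root namespace with 10.0.0.1/24, TCP port 4420 is opened in iptables, and a ping in each direction confirms the link. The same pattern, condensed so it could be replayed elsewhere, is sketched below; the interface names are the ones from this log and would need to be replaced on another machine, it assumes the two ports are cabled to each other (back-to-back or via the same switch) as they are on this CI host, and sudo is added for a non-root shell since the CI run executes as root.

    # Replay of the namespace split traced above (interface names host-specific).
    TGT_IF=cvl_0_0            # port that will serve the nvmf target
    INI_IF=cvl_0_1            # port the initiator side keeps
    NS=cvl_0_0_ns_spdk
    sudo ip netns add "$NS"
    sudo ip link set "$TGT_IF" netns "$NS"
    sudo ip addr add 10.0.0.1/24 dev "$INI_IF"
    sudo ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    sudo ip link set "$INI_IF" up
    sudo ip netns exec "$NS" ip link set "$TGT_IF" up
    sudo ip netns exec "$NS" ip link set lo up
    sudo iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root namespace -> namespaced target port
    sudo ip netns exec "$NS" ping -c 1 10.0.0.1         # and back

Everything the target does from here on (the nvmf_tgt process launched with ip netns exec, its listener on 10.0.0.2:4420, and the abort example that connects to it) runs against that namespaced port.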
00:11:50.187 [2024-07-10 15:35:29.335220] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:50.187 [2024-07-10 15:35:29.335323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.187 [2024-07-10 15:35:29.336452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:50.187 [2024-07-10 15:35:29.336466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:51.123 15:35:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:51.123 15:35:30 -- common/autotest_common.sh@852 -- # return 0 00:11:51.123 15:35:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:51.123 15:35:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 15:35:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:51.123 15:35:30 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:11:51.123 15:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 [2024-07-10 15:35:30.155383] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:51.123 15:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.123 15:35:30 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:11:51.123 15:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 Malloc0 00:11:51.123 15:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.123 15:35:30 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:51.123 15:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 Delay0 00:11:51.123 15:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.123 15:35:30 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:51.123 15:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 15:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.123 15:35:30 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:11:51.123 15:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 15:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.123 15:35:30 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:51.123 15:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 [2024-07-10 15:35:30.226051] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:51.123 15:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:51.123 15:35:30 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:51.123 15:35:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:51.123 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:11:51.123 15:35:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:11:51.123 15:35:30 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:11:51.123 EAL: No free 2048 kB hugepages reported on node 1 00:11:51.123 [2024-07-10 15:35:30.291107] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:53.026 Initializing NVMe Controllers 00:11:53.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:11:53.026 controller IO queue size 128 less than required 00:11:53.026 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:11:53.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:11:53.026 Initialization complete. Launching workers. 00:11:53.026 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30936 00:11:53.026 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30997, failed to submit 62 00:11:53.026 success 30936, unsuccess 61, failed 0 00:11:53.026 15:35:32 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:53.026 15:35:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:53.026 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:11:53.026 15:35:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:53.026 15:35:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:11:53.026 15:35:32 -- target/abort.sh@38 -- # nvmftestfini 00:11:53.026 15:35:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:53.026 15:35:32 -- nvmf/common.sh@116 -- # sync 00:11:53.026 15:35:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:53.026 15:35:32 -- nvmf/common.sh@119 -- # set +e 00:11:53.026 15:35:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:53.026 15:35:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:53.026 rmmod nvme_tcp 00:11:53.285 rmmod nvme_fabrics 00:11:53.285 rmmod nvme_keyring 00:11:53.285 15:35:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:53.285 15:35:32 -- nvmf/common.sh@123 -- # set -e 00:11:53.285 15:35:32 -- nvmf/common.sh@124 -- # return 0 00:11:53.285 15:35:32 -- nvmf/common.sh@477 -- # '[' -n 2067245 ']' 00:11:53.285 15:35:32 -- nvmf/common.sh@478 -- # killprocess 2067245 00:11:53.285 15:35:32 -- common/autotest_common.sh@926 -- # '[' -z 2067245 ']' 00:11:53.285 15:35:32 -- common/autotest_common.sh@930 -- # kill -0 2067245 00:11:53.285 15:35:32 -- common/autotest_common.sh@931 -- # uname 00:11:53.285 15:35:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:53.285 15:35:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2067245 00:11:53.285 15:35:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:53.285 15:35:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:53.285 15:35:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2067245' 00:11:53.285 killing process with pid 2067245 00:11:53.285 15:35:32 -- common/autotest_common.sh@945 -- # kill 2067245 00:11:53.285 15:35:32 -- common/autotest_common.sh@950 -- # wait 2067245 00:11:53.544 15:35:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:53.544 15:35:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:53.544 15:35:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:53.544 15:35:32 -- 
nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:53.544 15:35:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:53.544 15:35:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.544 15:35:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:53.544 15:35:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.445 15:35:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:11:55.445 00:11:55.445 real 0m7.959s 00:11:55.445 user 0m12.622s 00:11:55.445 sys 0m2.575s 00:11:55.445 15:35:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.445 15:35:34 -- common/autotest_common.sh@10 -- # set +x 00:11:55.445 ************************************ 00:11:55.445 END TEST nvmf_abort 00:11:55.445 ************************************ 00:11:55.703 15:35:34 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:55.703 15:35:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:55.703 15:35:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:55.703 15:35:34 -- common/autotest_common.sh@10 -- # set +x 00:11:55.703 ************************************ 00:11:55.704 START TEST nvmf_ns_hotplug_stress 00:11:55.704 ************************************ 00:11:55.704 15:35:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:11:55.704 * Looking for test storage... 00:11:55.704 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:55.704 15:35:34 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:55.704 15:35:34 -- nvmf/common.sh@7 -- # uname -s 00:11:55.704 15:35:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:55.704 15:35:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:55.704 15:35:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:55.704 15:35:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:55.704 15:35:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:55.704 15:35:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:55.704 15:35:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:55.704 15:35:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:55.704 15:35:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:55.704 15:35:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:55.704 15:35:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:55.704 15:35:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:55.704 15:35:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:55.704 15:35:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:55.704 15:35:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:55.704 15:35:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:55.704 15:35:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:55.704 15:35:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:55.704 15:35:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:55.704 15:35:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.704 15:35:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.704 15:35:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.704 15:35:34 -- paths/export.sh@5 -- # export PATH 00:11:55.704 15:35:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:55.704 15:35:34 -- nvmf/common.sh@46 -- # : 0 00:11:55.704 15:35:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:55.704 15:35:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:55.704 15:35:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:55.704 15:35:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:55.704 15:35:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:55.704 15:35:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:55.704 15:35:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:55.704 15:35:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:55.704 15:35:34 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:55.704 15:35:34 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:11:55.704 15:35:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:55.704 15:35:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:55.704 15:35:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:55.704 15:35:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:55.704 15:35:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:55.704 15:35:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:11:55.704 15:35:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.704 15:35:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:55.704 15:35:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:11:55.704 15:35:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:11:55.704 15:35:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:11:55.704 15:35:34 -- common/autotest_common.sh@10 -- # set +x 00:11:57.607 15:35:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:57.607 15:35:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:11:57.607 15:35:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:11:57.607 15:35:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:11:57.607 15:35:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:11:57.607 15:35:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:11:57.607 15:35:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:11:57.607 15:35:36 -- nvmf/common.sh@294 -- # net_devs=() 00:11:57.607 15:35:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:11:57.607 15:35:36 -- nvmf/common.sh@295 -- # e810=() 00:11:57.607 15:35:36 -- nvmf/common.sh@295 -- # local -ga e810 00:11:57.607 15:35:36 -- nvmf/common.sh@296 -- # x722=() 00:11:57.607 15:35:36 -- nvmf/common.sh@296 -- # local -ga x722 00:11:57.607 15:35:36 -- nvmf/common.sh@297 -- # mlx=() 00:11:57.607 15:35:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:11:57.607 15:35:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:57.607 15:35:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:11:57.607 15:35:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:11:57.607 15:35:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:11:57.607 15:35:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:57.607 15:35:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:57.607 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:57.607 15:35:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:11:57.607 15:35:36 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:57.607 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:57.607 15:35:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:11:57.607 15:35:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:57.607 15:35:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.607 15:35:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:57.607 15:35:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.607 15:35:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:57.607 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:57.607 15:35:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.607 15:35:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:11:57.607 15:35:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:57.607 15:35:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:11:57.607 15:35:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:57.607 15:35:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:57.607 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:57.607 15:35:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:11:57.607 15:35:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:11:57.607 15:35:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:11:57.607 15:35:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:11:57.607 15:35:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:11:57.607 15:35:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:57.607 15:35:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:57.607 15:35:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:57.607 15:35:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:11:57.607 15:35:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:57.607 15:35:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:57.607 15:35:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:11:57.607 15:35:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:57.607 15:35:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:57.607 15:35:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:11:57.607 15:35:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:11:57.607 15:35:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:11:57.607 15:35:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:57.607 15:35:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:57.607 15:35:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:57.607 15:35:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:11:57.607 15:35:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
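For readers following the nvmf_tcp_init trace above and just below, the network bring-up it performs can be summarized by the sketch that follows. This is a condensed reconstruction, not the verbatim test/nvmf/common.sh code; the interface names, addresses, and port (cvl_0_0, cvl_0_1, 10.0.0.1/10.0.0.2, 4420) are taken from this run, and the commands assume a root shell as on the CI host.

    # Move the target-side E810 port (cvl_0_0) into a private namespace so the
    # host-side initiator port (cvl_0_1) reaches it over NVMe/TCP on port 4420.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                  # reachability check before starting nvmf_tgt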
00:11:57.607 15:35:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:57.607 15:35:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:57.607 15:35:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:11:57.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:57.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms
00:11:57.607
00:11:57.607 --- 10.0.0.2 ping statistics ---
00:11:57.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:57.607 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:11:57.607 15:35:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:57.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:57.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms
00:11:57.607
00:11:57.607 --- 10.0.0.1 ping statistics ---
00:11:57.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:57.607 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms
00:11:57.607 15:35:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:57.607 15:35:36 -- nvmf/common.sh@410 -- # return 0
00:11:57.607 15:35:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:11:57.607 15:35:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:57.607 15:35:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:11:57.607 15:35:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:11:57.607 15:35:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:57.607 15:35:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:11:57.607 15:35:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:11:57.607 15:35:36 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:11:57.607 15:35:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:11:57.607 15:35:36 -- common/autotest_common.sh@712 -- # xtrace_disable
00:11:57.607 15:35:36 -- common/autotest_common.sh@10 -- # set +x
00:11:57.607 15:35:36 -- nvmf/common.sh@469 -- # nvmfpid=2069596
00:11:57.607 15:35:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:11:57.607 15:35:36 -- nvmf/common.sh@470 -- # waitforlisten 2069596
00:11:57.607 15:35:36 -- common/autotest_common.sh@819 -- # '[' -z 2069596 ']'
00:11:57.607 15:35:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:57.607 15:35:36 -- common/autotest_common.sh@824 -- # local max_retries=100
00:11:57.607 15:35:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:57.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:57.607 15:35:36 -- common/autotest_common.sh@828 -- # xtrace_disable
00:11:57.607 15:35:36 -- common/autotest_common.sh@10 -- # set +x
00:11:57.607 [2024-07-10 15:35:36.938280] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:11:57.607 [2024-07-10 15:35:36.938361] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.607 EAL: No free 2048 kB hugepages reported on node 1 00:11:57.866 [2024-07-10 15:35:37.003064] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:57.866 [2024-07-10 15:35:37.117384] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:57.866 [2024-07-10 15:35:37.117539] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:57.866 [2024-07-10 15:35:37.117558] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:57.866 [2024-07-10 15:35:37.117570] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:57.866 [2024-07-10 15:35:37.117670] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:57.866 [2024-07-10 15:35:37.117734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:57.866 [2024-07-10 15:35:37.117737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.798 15:35:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:58.799 15:35:37 -- common/autotest_common.sh@852 -- # return 0 00:11:58.799 15:35:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:58.799 15:35:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:58.799 15:35:37 -- common/autotest_common.sh@10 -- # set +x 00:11:58.799 15:35:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.799 15:35:37 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:58.799 15:35:37 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:58.799 [2024-07-10 15:35:38.164546] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:59.056 15:35:38 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:59.056 15:35:38 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:59.314 [2024-07-10 15:35:38.627037] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.314 15:35:38 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:59.571 15:35:38 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:11:59.829 Malloc0 00:11:59.829 15:35:39 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:00.087 Delay0 00:12:00.087 15:35:39 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:00.345 15:35:39 -- target/ns_hotplug_stress.sh@35 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:00.602 NULL1 00:12:00.602 15:35:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:00.860 15:35:40 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2069935 00:12:00.860 15:35:40 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:00.860 15:35:40 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:00.860 15:35:40 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:00.860 EAL: No free 2048 kB hugepages reported on node 1 00:12:02.231 Read completed with error (sct=0, sc=11) 00:12:02.231 15:35:41 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:02.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.231 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:02.231 15:35:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:02.231 15:35:41 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:02.488 true 00:12:02.488 15:35:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:02.488 15:35:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.421 15:35:42 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:03.421 15:35:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:03.421 15:35:42 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:03.679 true 00:12:03.679 15:35:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:03.679 15:35:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:03.936 15:35:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:04.192 15:35:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:04.192 15:35:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:04.449 true 00:12:04.449 15:35:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:04.449 15:35:43 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:05.379 15:35:44 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:05.379 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:05.636 15:35:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:05.636 15:35:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:05.892 true 00:12:05.892 15:35:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:05.892 15:35:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.149 15:35:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:06.405 15:35:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:06.405 15:35:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:06.662 true 00:12:06.662 15:35:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:06.662 15:35:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:07.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.592 15:35:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:07.592 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:07.849 15:35:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:07.850 15:35:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:08.106 true 00:12:08.106 15:35:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:08.106 15:35:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:08.363 15:35:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:08.620 15:35:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:08.620 15:35:47 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:08.877 true 00:12:08.877 15:35:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:08.877 15:35:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:09.810 15:35:48 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:09.810 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:09.810 15:35:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:09.810 15:35:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:10.068 true 00:12:10.068 15:35:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:10.068 15:35:49 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:10.326 15:35:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:10.584 15:35:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:10.584 15:35:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:10.842 true 00:12:10.842 15:35:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:10.842 15:35:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:11.777 15:35:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.035 15:35:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:12:12.035 15:35:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:12:12.293 true 00:12:12.550 15:35:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:12.550 15:35:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:12.550 15:35:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:12.807 15:35:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:12:12.807 15:35:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:12:13.064 true 00:12:13.064 15:35:52 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:13.064 15:35:52 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:13.322 15:35:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:13.580 15:35:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:12:13.580 15:35:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:12:13.837 true 00:12:13.837 15:35:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:13.837 15:35:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:15.215 15:35:54 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:15.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:15.215 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:12:15.215 15:35:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:12:15.215 15:35:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:12:15.548 true 00:12:15.548 15:35:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:15.548 15:35:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:16.499 15:35:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:16.500 15:35:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:12:16.500 15:35:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:12:16.757 true 00:12:16.757 15:35:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:16.757 15:35:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:17.015 15:35:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:17.274 15:35:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:12:17.274 15:35:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:12:17.532 true 00:12:17.532 15:35:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:17.532 15:35:56 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:18.465 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:18.465 15:35:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:18.723 15:35:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:12:18.723 15:35:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:12:18.981 true 00:12:18.981 15:35:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:18.981 15:35:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:19.239 15:35:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:19.496 15:35:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:12:19.496 15:35:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:12:19.754 true 00:12:19.754 15:35:58 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:19.754 15:35:58 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:20.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.687 15:35:59 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:20.687 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:20.944 15:36:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:12:20.944 15:36:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:12:21.202 true 00:12:21.202 15:36:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:21.202 15:36:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:21.459 15:36:00 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:21.716 15:36:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:12:21.716 15:36:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:12:21.974 true 00:12:21.974 15:36:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:21.974 15:36:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:22.907 15:36:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:22.907 15:36:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:12:22.907 15:36:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:12:23.166 true 00:12:23.166 15:36:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:23.166 15:36:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:23.424 15:36:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:23.681 15:36:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:12:23.681 15:36:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:12:23.938 true 00:12:23.938 15:36:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:23.938 15:36:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.870 15:36:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.127 15:36:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:12:25.127 15:36:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:12:25.385 true 00:12:25.385 15:36:04 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:25.385 15:36:04 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.644 15:36:04 -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:25.644 15:36:05 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:12:25.644 15:36:05 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:12:25.901 true 00:12:25.901 15:36:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:25.901 15:36:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:26.834 15:36:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:26.834 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:27.091 15:36:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:12:27.091 15:36:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:12:27.348 true 00:12:27.348 15:36:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:27.348 15:36:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:27.606 15:36:06 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.863 15:36:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:12:27.863 15:36:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:12:28.120 true 00:12:28.120 15:36:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:28.120 15:36:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.054 15:36:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:29.054 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:29.312 15:36:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:12:29.312 15:36:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:12:29.569 true 00:12:29.570 15:36:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:29.570 15:36:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:29.827 15:36:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:30.086 15:36:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:12:30.086 15:36:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:12:30.344 true 00:12:30.344 15:36:09 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935 00:12:30.344 15:36:09 -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:31.278 Initializing NVMe Controllers
00:12:31.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:31.278 Controller IO queue size 128, less than required.
00:12:31.278 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:31.278 Controller IO queue size 128, less than required.
00:12:31.278 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:31.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:12:31.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:12:31.278 Initialization complete. Launching workers.
00:12:31.278 ========================================================
00:12:31.278 Latency(us)
00:12:31.278 Device Information : IOPS MiB/s Average min max
00:12:31.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 808.13 0.39 89108.89 3644.74 1015007.63
00:12:31.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 12875.97 6.29 9940.71 1892.80 361983.01
00:12:31.278 ========================================================
00:12:31.278 Total : 13684.10 6.68 14616.08 1892.80 1015007.63
00:12:31.278
00:12:31.278 15:36:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:31.536 15:36:10 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:12:31.536 15:36:10 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:12:31.794 true
00:12:31.794 15:36:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 2069935
00:12:31.794 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2069935) - No such process
00:12:31.794 15:36:11 -- target/ns_hotplug_stress.sh@53 -- # wait 2069935
00:12:31.794 15:36:11 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:32.052 15:36:11 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:12:32.310 15:36:11 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:12:32.310 15:36:11 -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:12:32.310 15:36:11 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:12:32.310 15:36:11 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:32.310 15:36:11 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:12:32.568 null0
00:12:32.568 15:36:11 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:32.568 15:36:11 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:32.568 15:36:11 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:12:32.826 null1
00:12:32.826 15:36:11 -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:12:32.826 15:36:11 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:12:32.826 15:36:11 --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:12:33.084 null2 00:12:33.084 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:33.084 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:33.085 15:36:12 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:12:33.085 null3 00:12:33.085 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:33.085 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:33.085 15:36:12 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:12:33.343 null4 00:12:33.343 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:33.343 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:33.343 15:36:12 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:12:33.600 null5 00:12:33.600 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:33.600 15:36:12 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:33.600 15:36:12 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:12:33.857 null6 00:12:33.857 15:36:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:33.857 15:36:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:33.857 15:36:13 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:12:34.114 null7 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
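The eight background workers being launched in the trace above and below each run the script's add_remove routine against a different null bdev. The sketch below is a reconstruction pieced together from the ns_hotplug_stress.sh trace lines (@14-@18 and @58-@66), not a verbatim quote of the script.

    # Each worker repeatedly hot-adds and hot-removes one namespace on cnode1,
    # so the target sees eight namespaces churning concurrently.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        add_remove $((i + 1)) "null$i" &    # add_remove 1 null0 ... add_remove 8 null7
        pids+=($!)
    done
    wait "${pids[@]}"                       # the wait on the eight worker PIDs appears further on in the trace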
00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.114 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@66 -- # wait 2074711 2074712 2074714 2074716 2074718 2074720 2074722 2074724 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.115 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:34.372 15:36:13 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:34.629 15:36:13 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:34.887 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:35.146 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:35.405 15:36:14 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:35.663 15:36:14 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:35.663 15:36:14 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.921 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.179 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:36.180 15:36:15 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:36.437 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:36.437 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:36.437 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:36.437 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:36.437 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:36.437 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.437 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.438 15:36:15 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:36.695 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:36.953 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.211 15:36:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:37.469 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:37.469 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:37.469 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:37.469 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.470 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:37.470 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:37.470 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.470 15:36:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:37.727 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.984 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.241 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:38.498 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.498 15:36:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.498 15:36:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:12:38.498 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:38.498 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:38.498 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:12:38.498 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:38.755 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:38.755 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.755 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:38.755 15:36:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:38.755 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.012 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.013 15:36:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:12:39.013 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:39.013 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.270 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:39.526 15:36:18 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:12:39.526 15:36:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:39.526 15:36:18 -- nvmf/common.sh@116 -- # sync 00:12:39.526 15:36:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:39.526 15:36:18 -- nvmf/common.sh@119 -- # set +e 00:12:39.526 15:36:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:39.526 15:36:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:39.526 rmmod nvme_tcp 00:12:39.526 rmmod nvme_fabrics 00:12:39.526 rmmod nvme_keyring 00:12:39.526 15:36:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:39.526 15:36:18 -- nvmf/common.sh@123 -- # set -e 00:12:39.526 15:36:18 -- nvmf/common.sh@124 -- # return 0 00:12:39.526 15:36:18 -- nvmf/common.sh@477 -- # '[' -n 2069596 ']' 00:12:39.526 15:36:18 -- 
nvmf/common.sh@478 -- # killprocess 2069596 00:12:39.526 15:36:18 -- common/autotest_common.sh@926 -- # '[' -z 2069596 ']' 00:12:39.526 15:36:18 -- common/autotest_common.sh@930 -- # kill -0 2069596 00:12:39.526 15:36:18 -- common/autotest_common.sh@931 -- # uname 00:12:39.526 15:36:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:39.526 15:36:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2069596 00:12:39.526 15:36:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:39.526 15:36:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:39.526 15:36:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2069596' 00:12:39.526 killing process with pid 2069596 00:12:39.527 15:36:18 -- common/autotest_common.sh@945 -- # kill 2069596 00:12:39.527 15:36:18 -- common/autotest_common.sh@950 -- # wait 2069596 00:12:39.785 15:36:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:39.785 15:36:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:39.785 15:36:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:39.785 15:36:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:39.785 15:36:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:39.785 15:36:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.785 15:36:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.785 15:36:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.315 15:36:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:42.315 00:12:42.315 real 0m46.292s 00:12:42.315 user 3m21.156s 00:12:42.315 sys 0m19.274s 00:12:42.315 15:36:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.315 15:36:21 -- common/autotest_common.sh@10 -- # set +x 00:12:42.315 ************************************ 00:12:42.315 END TEST nvmf_ns_hotplug_stress 00:12:42.315 ************************************ 00:12:42.315 15:36:21 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:42.315 15:36:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:42.315 15:36:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:42.315 15:36:21 -- common/autotest_common.sh@10 -- # set +x 00:12:42.315 ************************************ 00:12:42.315 START TEST nvmf_connect_stress 00:12:42.315 ************************************ 00:12:42.315 15:36:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:42.315 * Looking for test storage... 
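The interleaved @16/@17/@18 xtrace above is the heart of the hotplug stress run: the eight null bdevs null0..null7 are attached to nqn.2016-06.io.spdk:cnode1 as namespaces 1..8 and hot-removed again, pass after pass, gated by the (( ++i )) / (( i < 10 )) counter checks at script line 16. Flattened into one sequential loop, the RPC pattern amounts to the sketch below; it is a reading aid rather than the ns_hotplug_stress.sh source (the real script evidently issues these RPCs from several workers at once, which is why the add/remove ordering in the log is shuffled), and the pass/n loop variables are invented here, while the rpc.py invocations themselves are copied from the log.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subsys=nqn.2016-06.io.spdk:cnode1

# Illustrative sketch only -- condenses the add/remove pattern recorded above;
# the loop structure is invented, the RPC arguments are the logged ones.
for (( pass = 0; pass < 10; pass++ )); do
    # attach null0..null7 as namespaces 1..8 (concurrent in the real test,
    # hence the shuffled ordering in the xtrace)
    for (( n = 0; n < 8; n++ )); do
        "$rpc" nvmf_subsystem_add_ns -n $(( n + 1 )) "$subsys" "null$n" &
    done
    wait
    # hot-remove the same eight namespaces again
    for (( n = 1; n <= 8; n++ )); do
        "$rpc" nvmf_subsystem_remove_ns "$subsys" "$n" &
    done
    wait
done

Once the counter runs out, the script drops its signal traps and calls nvmftestfini, which is what produces the rmmod nvme_tcp / nvme_fabrics / nvme_keyring and killprocess 2069596 teardown above, followed by the 0m46s wall-clock summary, the END TEST banner, and the preamble of the next test, nvmf_connect_stress.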
00:12:42.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.315 15:36:21 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.315 15:36:21 -- nvmf/common.sh@7 -- # uname -s 00:12:42.315 15:36:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.315 15:36:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.315 15:36:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.315 15:36:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.315 15:36:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:42.315 15:36:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:42.315 15:36:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.315 15:36:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:42.315 15:36:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.315 15:36:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:42.315 15:36:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.315 15:36:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:42.315 15:36:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.315 15:36:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:42.315 15:36:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:42.315 15:36:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.315 15:36:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.315 15:36:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.315 15:36:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.315 15:36:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.315 15:36:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.315 15:36:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.315 15:36:21 -- paths/export.sh@5 -- # export PATH 00:12:42.315 15:36:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.315 15:36:21 -- nvmf/common.sh@46 -- # : 0 00:12:42.315 15:36:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:42.315 15:36:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:42.315 15:36:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:42.315 15:36:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.315 15:36:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.315 15:36:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:42.315 15:36:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:42.315 15:36:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:42.315 15:36:21 -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:42.315 15:36:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:42.315 15:36:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.315 15:36:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:42.315 15:36:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:42.316 15:36:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:42.316 15:36:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.316 15:36:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.316 15:36:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:42.316 15:36:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:42.316 15:36:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:42.316 15:36:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:42.316 15:36:21 -- common/autotest_common.sh@10 -- # set +x 00:12:44.216 15:36:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:44.216 15:36:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:12:44.216 15:36:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:12:44.216 15:36:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:12:44.216 15:36:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:12:44.216 15:36:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:12:44.216 15:36:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:12:44.216 15:36:23 -- nvmf/common.sh@294 -- # net_devs=() 00:12:44.216 15:36:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:12:44.216 15:36:23 -- nvmf/common.sh@295 -- # e810=() 00:12:44.216 15:36:23 -- nvmf/common.sh@295 -- # local -ga e810 00:12:44.216 15:36:23 -- nvmf/common.sh@296 -- # x722=() 
00:12:44.216 15:36:23 -- nvmf/common.sh@296 -- # local -ga x722 00:12:44.216 15:36:23 -- nvmf/common.sh@297 -- # mlx=() 00:12:44.216 15:36:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:12:44.216 15:36:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:44.216 15:36:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:12:44.216 15:36:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:12:44.216 15:36:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:12:44.216 15:36:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:44.216 15:36:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:44.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:44.216 15:36:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:12:44.216 15:36:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:44.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:44.216 15:36:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:12:44.216 15:36:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:44.216 15:36:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.216 15:36:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:44.216 15:36:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.216 15:36:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:44.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:44.216 15:36:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:12:44.216 15:36:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:12:44.216 15:36:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:44.216 15:36:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:12:44.216 15:36:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:44.216 15:36:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:44.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:44.216 15:36:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:12:44.216 15:36:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:12:44.216 15:36:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:12:44.216 15:36:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:12:44.216 15:36:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:12:44.216 15:36:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:44.216 15:36:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:44.216 15:36:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:44.216 15:36:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:12:44.216 15:36:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.216 15:36:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.216 15:36:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:12:44.216 15:36:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.216 15:36:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.216 15:36:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:12:44.217 15:36:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:12:44.217 15:36:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.217 15:36:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.217 15:36:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.217 15:36:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.217 15:36:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:12:44.217 15:36:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.217 15:36:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.217 15:36:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.217 15:36:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:12:44.217 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.217 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:12:44.217 00:12:44.217 --- 10.0.0.2 ping statistics --- 00:12:44.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.217 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:44.217 15:36:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.217 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:44.217 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.093 ms 00:12:44.217 00:12:44.217 --- 10.0.0.1 ping statistics --- 00:12:44.217 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.217 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:12:44.217 15:36:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.217 15:36:23 -- nvmf/common.sh@410 -- # return 0 00:12:44.217 15:36:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:44.217 15:36:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.217 15:36:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:44.217 15:36:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:44.217 15:36:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.217 15:36:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:44.217 15:36:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:44.217 15:36:23 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:44.217 15:36:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:44.217 15:36:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:44.217 15:36:23 -- common/autotest_common.sh@10 -- # set +x 00:12:44.217 15:36:23 -- nvmf/common.sh@469 -- # nvmfpid=2077496 00:12:44.217 15:36:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:44.217 15:36:23 -- nvmf/common.sh@470 -- # waitforlisten 2077496 00:12:44.217 15:36:23 -- common/autotest_common.sh@819 -- # '[' -z 2077496 ']' 00:12:44.217 15:36:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.217 15:36:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:44.217 15:36:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.217 15:36:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:44.217 15:36:23 -- common/autotest_common.sh@10 -- # set +x 00:12:44.217 [2024-07-10 15:36:23.479737] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:44.217 [2024-07-10 15:36:23.479819] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.217 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.217 [2024-07-10 15:36:23.550184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:44.476 [2024-07-10 15:36:23.667117] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:44.476 [2024-07-10 15:36:23.667253] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.476 [2024-07-10 15:36:23.667271] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.476 [2024-07-10 15:36:23.667284] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
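Before this point, nvmf/common.sh had already located the two E810 functions (0000:0a:00.0 and 0000:0a:00.1, vendor 0x8086 device 0x159b) by matching them against its e810 allow-list and reading the bound netdevs out of /sys/bus/pci/devices/$pci/net/, which is where cvl_0_0 and cvl_0_1 come from. nvmf_tcp_init then turns those two ports into a self-contained two-endpoint test bed: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk network namespace and addressed as 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side), TCP port 4420 is opened in iptables, and one ping in each direction proves the path before any NVMe/TCP traffic is attempted. Collected from the xtrace above into a single runnable block (interface, namespace and address names exactly as logged; needs root):

# Consolidated from the nvmf_tcp_init xtrace above -- same commands, same names.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target (0.207 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target netns -> initiator (0.093 ms above)

With connectivity confirmed, modprobe nvme-tcp loads the host-side driver and nvmfappstart launches nvmf_tgt inside that namespace with core mask 0xE (pid 2077496 above). The xtrace that follows this point provisions the target and kicks off the stress tool; condensed, with the RPC arguments kept exactly as logged and the script's rpc_cmd helper replaced here by a direct rpc.py call for self-containment, it amounts to:

# Condensed from the connect_stress.sh xtrace that follows -- argument values are
# verbatim from the log; rpc_cmd is swapped for a direct rpc.py invocation.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

"$rpc" nvmf_create_transport -t tcp -o -u 8192
"$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10   # allow any host, max 10 namespaces
"$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_null_create NULL1 1000 512                               # 1000 MB null bdev, 512 B blocks

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress \
    -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
PERF_PID=$!   # 2077661 in this run

The long run of '@34 kill -0 2077661' / '@35 rpc_cmd' pairs that fills the rest of this section is the watchdog loop: it keeps confirming that the stress process is still alive while driving further RPC traffic at the target (presumably the batch the script staged in rpc.txt at @23-@28), so that scrolling output is the expected steady state of the test rather than a hang.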
00:12:44.476 [2024-07-10 15:36:23.667340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.476 [2024-07-10 15:36:23.667396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.476 [2024-07-10 15:36:23.667399] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.100 15:36:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:45.100 15:36:24 -- common/autotest_common.sh@852 -- # return 0 00:12:45.100 15:36:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:45.100 15:36:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:45.100 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.100 15:36:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.100 15:36:24 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:45.100 15:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.100 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.100 [2024-07-10 15:36:24.434366] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.100 15:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.100 15:36:24 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:45.100 15:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.100 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.100 15:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.100 15:36:24 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:45.100 15:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.100 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.357 [2024-07-10 15:36:24.465550] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:45.357 15:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.357 15:36:24 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:45.357 15:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.357 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.357 NULL1 00:12:45.357 15:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.357 15:36:24 -- target/connect_stress.sh@21 -- # PERF_PID=2077661 00:12:45.357 15:36:24 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:45.357 15:36:24 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:45.357 15:36:24 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # seq 1 20 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:45.357 15:36:24 -- target/connect_stress.sh@28 -- # cat 00:12:45.357 15:36:24 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:45.357 15:36:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.357 15:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.357 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.615 15:36:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.615 15:36:24 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:45.615 15:36:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.615 15:36:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.615 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:12:45.872 15:36:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:45.872 15:36:25 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:45.872 15:36:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:45.872 15:36:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:45.872 15:36:25 -- common/autotest_common.sh@10 -- # set +x 00:12:46.129 15:36:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.129 15:36:25 -- target/connect_stress.sh@34 -- # 
kill -0 2077661 00:12:46.130 15:36:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.130 15:36:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.130 15:36:25 -- common/autotest_common.sh@10 -- # set +x 00:12:46.694 15:36:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.694 15:36:25 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:46.694 15:36:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.694 15:36:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.694 15:36:25 -- common/autotest_common.sh@10 -- # set +x 00:12:46.951 15:36:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:46.951 15:36:26 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:46.951 15:36:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:46.951 15:36:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:46.951 15:36:26 -- common/autotest_common.sh@10 -- # set +x 00:12:47.208 15:36:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.208 15:36:26 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:47.208 15:36:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.208 15:36:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.208 15:36:26 -- common/autotest_common.sh@10 -- # set +x 00:12:47.464 15:36:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.464 15:36:26 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:47.464 15:36:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.464 15:36:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.464 15:36:26 -- common/autotest_common.sh@10 -- # set +x 00:12:47.721 15:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:47.721 15:36:27 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:47.721 15:36:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:47.721 15:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:47.721 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:12:48.284 15:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.284 15:36:27 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:48.284 15:36:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.284 15:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.284 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:12:48.541 15:36:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.541 15:36:27 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:48.541 15:36:27 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.541 15:36:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.541 15:36:27 -- common/autotest_common.sh@10 -- # set +x 00:12:48.798 15:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:48.798 15:36:28 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:48.798 15:36:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:48.798 15:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:48.798 15:36:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.055 15:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.055 15:36:28 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:49.055 15:36:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.055 15:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.055 15:36:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.620 15:36:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.620 15:36:28 -- target/connect_stress.sh@34 -- # kill -0 
2077661 00:12:49.620 15:36:28 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.620 15:36:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.620 15:36:28 -- common/autotest_common.sh@10 -- # set +x 00:12:49.878 15:36:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:49.878 15:36:29 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:49.878 15:36:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:49.878 15:36:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:49.878 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.135 15:36:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.135 15:36:29 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:50.135 15:36:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.135 15:36:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.135 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.393 15:36:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.393 15:36:29 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:50.393 15:36:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.393 15:36:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.393 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:12:50.650 15:36:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.650 15:36:29 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:50.650 15:36:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:50.650 15:36:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.650 15:36:29 -- common/autotest_common.sh@10 -- # set +x 00:12:51.216 15:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.216 15:36:30 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:51.216 15:36:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.216 15:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.216 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:12:51.472 15:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.472 15:36:30 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:51.472 15:36:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.472 15:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.472 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:12:51.729 15:36:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.729 15:36:30 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:51.729 15:36:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.729 15:36:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.729 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:12:51.986 15:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:51.986 15:36:31 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:51.986 15:36:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:51.986 15:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:51.986 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.243 15:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.243 15:36:31 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:52.243 15:36:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.243 15:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.243 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:12:52.808 15:36:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.808 15:36:31 -- target/connect_stress.sh@34 -- # kill -0 2077661 
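The repeated pairs above are the stress loop's liveness probe: kill -0 sends no signal at all, it only succeeds while PID 2077661 (the background connect_stress worker) still exists, and each probe is followed by an rpc_cmd so the target keeps servicing requests in between. A minimal sketch of that pattern, with stress_pid and check_target as hypothetical placeholders rather than names taken from connect_stress.sh:

    # Poll a background worker with kill -0 while keeping the target busy.
    check_target() { true; }          # placeholder probe; the real test issues an RPC here
    stress_pid=$1                     # hypothetical: PID of the background stress process
    while kill -0 "$stress_pid" 2>/dev/null; do
        check_target
        sleep 1
    done
    wait "$stress_pid" 2>/dev/null    # reap it once kill -0 reports "No such process"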
00:12:52.808 15:36:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:52.808 15:36:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.808 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:12:53.066 15:36:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.066 15:36:32 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:53.066 15:36:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.066 15:36:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.066 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.323 15:36:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.323 15:36:32 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:53.323 15:36:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.323 15:36:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.323 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.581 15:36:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.581 15:36:32 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:53.581 15:36:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.581 15:36:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.581 15:36:32 -- common/autotest_common.sh@10 -- # set +x 00:12:53.838 15:36:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:53.838 15:36:33 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:53.838 15:36:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:53.838 15:36:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:53.838 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:12:54.403 15:36:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.403 15:36:33 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:54.403 15:36:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.403 15:36:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.403 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:12:54.661 15:36:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.661 15:36:33 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:54.661 15:36:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.661 15:36:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.661 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:12:54.918 15:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:54.918 15:36:34 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:54.918 15:36:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:54.918 15:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:54.918 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:12:55.176 15:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.176 15:36:34 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:55.176 15:36:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:55.176 15:36:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:55.176 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:12:55.434 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:55.434 15:36:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:55.434 15:36:34 -- target/connect_stress.sh@34 -- # kill -0 2077661 00:12:55.434 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2077661) - No such process 00:12:55.434 15:36:34 -- target/connect_stress.sh@38 -- # wait 2077661 00:12:55.434 15:36:34 -- target/connect_stress.sh@39 -- # rm 
-f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:55.434 15:36:34 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:55.434 15:36:34 -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:55.434 15:36:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:55.434 15:36:34 -- nvmf/common.sh@116 -- # sync 00:12:55.692 15:36:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:55.692 15:36:34 -- nvmf/common.sh@119 -- # set +e 00:12:55.692 15:36:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:55.692 15:36:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:55.692 rmmod nvme_tcp 00:12:55.692 rmmod nvme_fabrics 00:12:55.692 rmmod nvme_keyring 00:12:55.692 15:36:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:55.692 15:36:34 -- nvmf/common.sh@123 -- # set -e 00:12:55.692 15:36:34 -- nvmf/common.sh@124 -- # return 0 00:12:55.692 15:36:34 -- nvmf/common.sh@477 -- # '[' -n 2077496 ']' 00:12:55.692 15:36:34 -- nvmf/common.sh@478 -- # killprocess 2077496 00:12:55.692 15:36:34 -- common/autotest_common.sh@926 -- # '[' -z 2077496 ']' 00:12:55.692 15:36:34 -- common/autotest_common.sh@930 -- # kill -0 2077496 00:12:55.692 15:36:34 -- common/autotest_common.sh@931 -- # uname 00:12:55.692 15:36:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:55.692 15:36:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2077496 00:12:55.692 15:36:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:55.692 15:36:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:55.692 15:36:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2077496' 00:12:55.692 killing process with pid 2077496 00:12:55.692 15:36:34 -- common/autotest_common.sh@945 -- # kill 2077496 00:12:55.692 15:36:34 -- common/autotest_common.sh@950 -- # wait 2077496 00:12:55.950 15:36:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:55.950 15:36:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:55.950 15:36:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:55.950 15:36:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:55.950 15:36:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:55.950 15:36:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.950 15:36:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.950 15:36:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:57.854 15:36:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:12:57.854 00:12:57.854 real 0m16.060s 00:12:57.854 user 0m40.411s 00:12:57.854 sys 0m5.982s 00:12:57.854 15:36:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:57.854 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:12:57.854 ************************************ 00:12:57.854 END TEST nvmf_connect_stress 00:12:57.854 ************************************ 00:12:57.854 15:36:37 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:57.854 15:36:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:57.854 15:36:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:57.854 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:12:57.854 ************************************ 00:12:57.854 START TEST nvmf_fused_ordering 00:12:57.854 ************************************ 00:12:57.854 15:36:37 -- common/autotest_common.sh@1104 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:58.112 * Looking for test storage... 00:12:58.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:58.112 15:36:37 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:58.112 15:36:37 -- nvmf/common.sh@7 -- # uname -s 00:12:58.112 15:36:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:58.112 15:36:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:58.112 15:36:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:58.112 15:36:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:58.112 15:36:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:58.112 15:36:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:58.112 15:36:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:58.112 15:36:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:58.112 15:36:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:58.112 15:36:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:58.112 15:36:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.112 15:36:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:58.112 15:36:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:58.112 15:36:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:58.112 15:36:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:58.112 15:36:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:58.112 15:36:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:58.112 15:36:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:58.112 15:36:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:58.112 15:36:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.112 15:36:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.112 15:36:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.112 15:36:37 -- paths/export.sh@5 -- # export PATH 00:12:58.112 15:36:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:58.112 15:36:37 -- nvmf/common.sh@46 -- # : 0 00:12:58.112 15:36:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:58.112 15:36:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:58.112 15:36:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:58.112 15:36:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:58.112 15:36:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:58.112 15:36:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:58.112 15:36:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:58.112 15:36:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:58.112 15:36:37 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:58.112 15:36:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:58.112 15:36:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:58.112 15:36:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:58.112 15:36:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:58.112 15:36:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:58.112 15:36:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.112 15:36:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.112 15:36:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:58.112 15:36:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:12:58.112 15:36:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:12:58.112 15:36:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:12:58.112 15:36:37 -- common/autotest_common.sh@10 -- # set +x 00:13:00.013 15:36:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:00.013 15:36:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:00.013 15:36:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:00.013 15:36:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:00.013 15:36:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:00.013 15:36:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:00.013 15:36:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:00.013 15:36:39 -- nvmf/common.sh@294 -- # net_devs=() 00:13:00.013 15:36:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:00.013 15:36:39 -- nvmf/common.sh@295 -- # e810=() 00:13:00.013 15:36:39 -- nvmf/common.sh@295 -- # local -ga e810 00:13:00.013 15:36:39 -- nvmf/common.sh@296 -- # x722=() 
00:13:00.013 15:36:39 -- nvmf/common.sh@296 -- # local -ga x722 00:13:00.013 15:36:39 -- nvmf/common.sh@297 -- # mlx=() 00:13:00.013 15:36:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:00.013 15:36:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:00.013 15:36:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:00.013 15:36:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:00.013 15:36:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:00.013 15:36:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:00.013 15:36:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:00.013 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:00.013 15:36:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:00.013 15:36:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:00.013 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:00.013 15:36:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:00.013 15:36:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:00.013 15:36:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:00.013 15:36:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.013 15:36:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:00.013 15:36:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.013 15:36:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:00.013 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:00.013 15:36:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
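Each "Found net devices under 0000:0a:00.x" line comes from expanding the net/ directory that the kernel exposes under the PCI function in sysfs, which is how the script maps an allow-listed PCI ID to an interface name such as cvl_0_0. A standalone sketch of that lookup, with the PCI address hard-coded purely as an example:

    # Map a PCI network function to its kernel interface name via sysfs.
    pci=0000:0a:00.0                              # example address; substitute your NIC's BDF
    for dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$dev" ] || continue                 # no netdev bound (e.g. driver not loaded)
        echo "Found net device under $pci: ${dev##*/}"
    done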
00:13:00.013 15:36:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:00.013 15:36:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:00.013 15:36:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:00.013 15:36:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:00.013 15:36:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:00.013 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:00.013 15:36:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:00.013 15:36:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:00.013 15:36:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:00.014 15:36:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:00.014 15:36:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:00.014 15:36:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:00.014 15:36:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:00.014 15:36:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:00.014 15:36:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:00.014 15:36:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:00.014 15:36:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:00.014 15:36:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:00.014 15:36:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:00.014 15:36:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:00.014 15:36:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:00.014 15:36:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:00.014 15:36:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:00.014 15:36:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:00.014 15:36:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:00.014 15:36:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:00.014 15:36:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:00.014 15:36:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:00.014 15:36:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:00.014 15:36:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:00.014 15:36:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:00.014 15:36:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:00.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:00.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:13:00.014 00:13:00.014 --- 10.0.0.2 ping statistics --- 00:13:00.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.014 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:13:00.014 15:36:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:00.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:00.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:13:00.014 00:13:00.014 --- 10.0.0.1 ping statistics --- 00:13:00.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:00.014 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:00.014 15:36:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:00.014 15:36:39 -- nvmf/common.sh@410 -- # return 0 00:13:00.014 15:36:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:00.014 15:36:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:00.014 15:36:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:00.014 15:36:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:00.014 15:36:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:00.014 15:36:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:00.014 15:36:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:00.273 15:36:39 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:00.273 15:36:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:00.273 15:36:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:00.273 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:00.273 15:36:39 -- nvmf/common.sh@469 -- # nvmfpid=2080856 00:13:00.273 15:36:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:00.273 15:36:39 -- nvmf/common.sh@470 -- # waitforlisten 2080856 00:13:00.273 15:36:39 -- common/autotest_common.sh@819 -- # '[' -z 2080856 ']' 00:13:00.273 15:36:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:00.273 15:36:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:00.273 15:36:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:00.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:00.273 15:36:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:00.273 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:13:00.273 [2024-07-10 15:36:39.451901] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:00.273 [2024-07-10 15:36:39.451968] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:00.273 EAL: No free 2048 kB hugepages reported on node 1 00:13:00.273 [2024-07-10 15:36:39.518247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.273 [2024-07-10 15:36:39.638643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:00.273 [2024-07-10 15:36:39.638787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:00.273 [2024-07-10 15:36:39.638804] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:00.273 [2024-07-10 15:36:39.638816] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
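By this point nvmf_tcp_init has split the two e810 ports into a target side and an initiator side: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened, and both directions are pinged. A condensed recreation of that sequence as it appears in the trace (interface names and addresses are this fixture's, not generic defaults):

    # Target port lives in its own network namespace; initiator stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator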
00:13:00.273 [2024-07-10 15:36:39.638844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.206 15:36:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:01.206 15:36:40 -- common/autotest_common.sh@852 -- # return 0 00:13:01.206 15:36:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:01.206 15:36:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:01.206 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.206 15:36:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.206 15:36:40 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:01.206 15:36:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.206 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.206 [2024-07-10 15:36:40.451085] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:01.206 15:36:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.206 15:36:40 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:01.206 15:36:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.206 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.206 15:36:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.206 15:36:40 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.206 15:36:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.206 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.206 [2024-07-10 15:36:40.467252] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.206 15:36:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.206 15:36:40 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:01.206 15:36:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.206 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.206 NULL1 00:13:01.206 15:36:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.206 15:36:40 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:01.206 15:36:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.206 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.206 15:36:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.206 15:36:40 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:01.206 15:36:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:01.206 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:13:01.206 15:36:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:01.206 15:36:40 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:01.206 [2024-07-10 15:36:40.512386] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
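The rpc_cmd calls above build the target that the fused_ordering tool connects to: a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420, and a null bdev named NULL1 exposed as namespace 1 (reported as 1 GB). Roughly the same provisioning can be expressed with SPDK's scripts/rpc.py against the target's default /var/tmp/spdk.sock; the flags below simply mirror the trace and are a sketch, not a substitute for the test script:

    # Provision the target by hand with rpc.py (defaults to /var/tmp/spdk.sock).
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512          # 1000 MB, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The null bdev gives the ordering test a namespace with no real media behind it, so the run exercises command handling in the target rather than a disk.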
00:13:01.206 [2024-07-10 15:36:40.512441] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081012 ] 00:13:01.206 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.771 Attached to nqn.2016-06.io.spdk:cnode1 00:13:01.771 Namespace ID: 1 size: 1GB 00:13:01.771 fused_ordering(0) 00:13:01.771 fused_ordering(1) 00:13:01.771 fused_ordering(2) 00:13:01.771 fused_ordering(3) 00:13:01.772 fused_ordering(4) 00:13:01.772 fused_ordering(5) 00:13:01.772 fused_ordering(6) 00:13:01.772 fused_ordering(7) 00:13:01.772 fused_ordering(8) 00:13:01.772 fused_ordering(9) 00:13:01.772 fused_ordering(10) 00:13:01.772 fused_ordering(11) 00:13:01.772 fused_ordering(12) 00:13:01.772 fused_ordering(13) 00:13:01.772 fused_ordering(14) 00:13:01.772 fused_ordering(15) 00:13:01.772 fused_ordering(16) 00:13:01.772 fused_ordering(17) 00:13:01.772 fused_ordering(18) 00:13:01.772 fused_ordering(19) 00:13:01.772 fused_ordering(20) 00:13:01.772 fused_ordering(21) 00:13:01.772 fused_ordering(22) 00:13:01.772 fused_ordering(23) 00:13:01.772 fused_ordering(24) 00:13:01.772 fused_ordering(25) 00:13:01.772 fused_ordering(26) 00:13:01.772 fused_ordering(27) 00:13:01.772 fused_ordering(28) 00:13:01.772 fused_ordering(29) 00:13:01.772 fused_ordering(30) 00:13:01.772 fused_ordering(31) 00:13:01.772 fused_ordering(32) 00:13:01.772 fused_ordering(33) 00:13:01.772 fused_ordering(34) 00:13:01.772 fused_ordering(35) 00:13:01.772 fused_ordering(36) 00:13:01.772 fused_ordering(37) 00:13:01.772 fused_ordering(38) 00:13:01.772 fused_ordering(39) 00:13:01.772 fused_ordering(40) 00:13:01.772 fused_ordering(41) 00:13:01.772 fused_ordering(42) 00:13:01.772 fused_ordering(43) 00:13:01.772 fused_ordering(44) 00:13:01.772 fused_ordering(45) 00:13:01.772 fused_ordering(46) 00:13:01.772 fused_ordering(47) 00:13:01.772 fused_ordering(48) 00:13:01.772 fused_ordering(49) 00:13:01.772 fused_ordering(50) 00:13:01.772 fused_ordering(51) 00:13:01.772 fused_ordering(52) 00:13:01.772 fused_ordering(53) 00:13:01.772 fused_ordering(54) 00:13:01.772 fused_ordering(55) 00:13:01.772 fused_ordering(56) 00:13:01.772 fused_ordering(57) 00:13:01.772 fused_ordering(58) 00:13:01.772 fused_ordering(59) 00:13:01.772 fused_ordering(60) 00:13:01.772 fused_ordering(61) 00:13:01.772 fused_ordering(62) 00:13:01.772 fused_ordering(63) 00:13:01.772 fused_ordering(64) 00:13:01.772 fused_ordering(65) 00:13:01.772 fused_ordering(66) 00:13:01.772 fused_ordering(67) 00:13:01.772 fused_ordering(68) 00:13:01.772 fused_ordering(69) 00:13:01.772 fused_ordering(70) 00:13:01.772 fused_ordering(71) 00:13:01.772 fused_ordering(72) 00:13:01.772 fused_ordering(73) 00:13:01.772 fused_ordering(74) 00:13:01.772 fused_ordering(75) 00:13:01.772 fused_ordering(76) 00:13:01.772 fused_ordering(77) 00:13:01.772 fused_ordering(78) 00:13:01.772 fused_ordering(79) 00:13:01.772 fused_ordering(80) 00:13:01.772 fused_ordering(81) 00:13:01.772 fused_ordering(82) 00:13:01.772 fused_ordering(83) 00:13:01.772 fused_ordering(84) 00:13:01.772 fused_ordering(85) 00:13:01.772 fused_ordering(86) 00:13:01.772 fused_ordering(87) 00:13:01.772 fused_ordering(88) 00:13:01.772 fused_ordering(89) 00:13:01.772 fused_ordering(90) 00:13:01.772 fused_ordering(91) 00:13:01.772 fused_ordering(92) 00:13:01.772 fused_ordering(93) 00:13:01.772 fused_ordering(94) 00:13:01.772 fused_ordering(95) 00:13:01.772 fused_ordering(96) 00:13:01.772 
fused_ordering(97) 00:13:01.772 fused_ordering(98) 00:13:01.772 fused_ordering(99) 00:13:01.772 fused_ordering(100) 00:13:01.772 fused_ordering(101) 00:13:01.772 fused_ordering(102) 00:13:01.772 fused_ordering(103) 00:13:01.772 fused_ordering(104) 00:13:01.772 fused_ordering(105) 00:13:01.772 fused_ordering(106) 00:13:01.772 fused_ordering(107) 00:13:01.772 fused_ordering(108) 00:13:01.772 fused_ordering(109) 00:13:01.772 fused_ordering(110) 00:13:01.772 fused_ordering(111) 00:13:01.772 fused_ordering(112) 00:13:01.772 fused_ordering(113) 00:13:01.772 fused_ordering(114) 00:13:01.772 fused_ordering(115) 00:13:01.772 fused_ordering(116) 00:13:01.772 fused_ordering(117) 00:13:01.772 fused_ordering(118) 00:13:01.772 fused_ordering(119) 00:13:01.772 fused_ordering(120) 00:13:01.772 fused_ordering(121) 00:13:01.772 fused_ordering(122) 00:13:01.772 fused_ordering(123) 00:13:01.772 fused_ordering(124) 00:13:01.772 fused_ordering(125) 00:13:01.772 fused_ordering(126) 00:13:01.772 fused_ordering(127) 00:13:01.772 fused_ordering(128) 00:13:01.772 fused_ordering(129) 00:13:01.772 fused_ordering(130) 00:13:01.772 fused_ordering(131) 00:13:01.772 fused_ordering(132) 00:13:01.772 fused_ordering(133) 00:13:01.772 fused_ordering(134) 00:13:01.772 fused_ordering(135) 00:13:01.772 fused_ordering(136) 00:13:01.772 fused_ordering(137) 00:13:01.772 fused_ordering(138) 00:13:01.772 fused_ordering(139) 00:13:01.772 fused_ordering(140) 00:13:01.772 fused_ordering(141) 00:13:01.772 fused_ordering(142) 00:13:01.772 fused_ordering(143) 00:13:01.772 fused_ordering(144) 00:13:01.772 fused_ordering(145) 00:13:01.772 fused_ordering(146) 00:13:01.772 fused_ordering(147) 00:13:01.772 fused_ordering(148) 00:13:01.772 fused_ordering(149) 00:13:01.772 fused_ordering(150) 00:13:01.772 fused_ordering(151) 00:13:01.772 fused_ordering(152) 00:13:01.772 fused_ordering(153) 00:13:01.772 fused_ordering(154) 00:13:01.772 fused_ordering(155) 00:13:01.772 fused_ordering(156) 00:13:01.772 fused_ordering(157) 00:13:01.772 fused_ordering(158) 00:13:01.772 fused_ordering(159) 00:13:01.772 fused_ordering(160) 00:13:01.772 fused_ordering(161) 00:13:01.772 fused_ordering(162) 00:13:01.772 fused_ordering(163) 00:13:01.772 fused_ordering(164) 00:13:01.772 fused_ordering(165) 00:13:01.772 fused_ordering(166) 00:13:01.772 fused_ordering(167) 00:13:01.772 fused_ordering(168) 00:13:01.772 fused_ordering(169) 00:13:01.772 fused_ordering(170) 00:13:01.772 fused_ordering(171) 00:13:01.772 fused_ordering(172) 00:13:01.773 fused_ordering(173) 00:13:01.773 fused_ordering(174) 00:13:01.773 fused_ordering(175) 00:13:01.773 fused_ordering(176) 00:13:01.773 fused_ordering(177) 00:13:01.773 fused_ordering(178) 00:13:01.773 fused_ordering(179) 00:13:01.773 fused_ordering(180) 00:13:01.773 fused_ordering(181) 00:13:01.773 fused_ordering(182) 00:13:01.773 fused_ordering(183) 00:13:01.773 fused_ordering(184) 00:13:01.773 fused_ordering(185) 00:13:01.773 fused_ordering(186) 00:13:01.773 fused_ordering(187) 00:13:01.773 fused_ordering(188) 00:13:01.773 fused_ordering(189) 00:13:01.773 fused_ordering(190) 00:13:01.773 fused_ordering(191) 00:13:01.773 fused_ordering(192) 00:13:01.773 fused_ordering(193) 00:13:01.773 fused_ordering(194) 00:13:01.773 fused_ordering(195) 00:13:01.773 fused_ordering(196) 00:13:01.773 fused_ordering(197) 00:13:01.773 fused_ordering(198) 00:13:01.773 fused_ordering(199) 00:13:01.773 fused_ordering(200) 00:13:01.773 fused_ordering(201) 00:13:01.773 fused_ordering(202) 00:13:01.773 fused_ordering(203) 00:13:01.773 fused_ordering(204) 
00:13:01.773 fused_ordering(205) 00:13:02.337 fused_ordering(206) 00:13:02.337 fused_ordering(207) 00:13:02.337 fused_ordering(208) 00:13:02.337 fused_ordering(209) 00:13:02.337 fused_ordering(210) 00:13:02.337 fused_ordering(211) 00:13:02.337 fused_ordering(212) 00:13:02.337 fused_ordering(213) 00:13:02.337 fused_ordering(214) 00:13:02.337 fused_ordering(215) 00:13:02.337 fused_ordering(216) 00:13:02.337 fused_ordering(217) 00:13:02.337 fused_ordering(218) 00:13:02.337 fused_ordering(219) 00:13:02.337 fused_ordering(220) 00:13:02.337 fused_ordering(221) 00:13:02.337 fused_ordering(222) 00:13:02.337 fused_ordering(223) 00:13:02.337 fused_ordering(224) 00:13:02.337 fused_ordering(225) 00:13:02.337 fused_ordering(226) 00:13:02.337 fused_ordering(227) 00:13:02.337 fused_ordering(228) 00:13:02.337 fused_ordering(229) 00:13:02.337 fused_ordering(230) 00:13:02.337 fused_ordering(231) 00:13:02.337 fused_ordering(232) 00:13:02.337 fused_ordering(233) 00:13:02.337 fused_ordering(234) 00:13:02.337 fused_ordering(235) 00:13:02.337 fused_ordering(236) 00:13:02.337 fused_ordering(237) 00:13:02.337 fused_ordering(238) 00:13:02.337 fused_ordering(239) 00:13:02.337 fused_ordering(240) 00:13:02.337 fused_ordering(241) 00:13:02.337 fused_ordering(242) 00:13:02.337 fused_ordering(243) 00:13:02.337 fused_ordering(244) 00:13:02.337 fused_ordering(245) 00:13:02.337 fused_ordering(246) 00:13:02.337 fused_ordering(247) 00:13:02.337 fused_ordering(248) 00:13:02.337 fused_ordering(249) 00:13:02.337 fused_ordering(250) 00:13:02.338 fused_ordering(251) 00:13:02.338 fused_ordering(252) 00:13:02.338 fused_ordering(253) 00:13:02.338 fused_ordering(254) 00:13:02.338 fused_ordering(255) 00:13:02.338 fused_ordering(256) 00:13:02.338 fused_ordering(257) 00:13:02.338 fused_ordering(258) 00:13:02.338 fused_ordering(259) 00:13:02.338 fused_ordering(260) 00:13:02.338 fused_ordering(261) 00:13:02.338 fused_ordering(262) 00:13:02.338 fused_ordering(263) 00:13:02.338 fused_ordering(264) 00:13:02.338 fused_ordering(265) 00:13:02.338 fused_ordering(266) 00:13:02.338 fused_ordering(267) 00:13:02.338 fused_ordering(268) 00:13:02.338 fused_ordering(269) 00:13:02.338 fused_ordering(270) 00:13:02.338 fused_ordering(271) 00:13:02.338 fused_ordering(272) 00:13:02.338 fused_ordering(273) 00:13:02.338 fused_ordering(274) 00:13:02.338 fused_ordering(275) 00:13:02.338 fused_ordering(276) 00:13:02.338 fused_ordering(277) 00:13:02.338 fused_ordering(278) 00:13:02.338 fused_ordering(279) 00:13:02.338 fused_ordering(280) 00:13:02.338 fused_ordering(281) 00:13:02.338 fused_ordering(282) 00:13:02.338 fused_ordering(283) 00:13:02.338 fused_ordering(284) 00:13:02.338 fused_ordering(285) 00:13:02.338 fused_ordering(286) 00:13:02.338 fused_ordering(287) 00:13:02.338 fused_ordering(288) 00:13:02.338 fused_ordering(289) 00:13:02.338 fused_ordering(290) 00:13:02.338 fused_ordering(291) 00:13:02.338 fused_ordering(292) 00:13:02.338 fused_ordering(293) 00:13:02.338 fused_ordering(294) 00:13:02.338 fused_ordering(295) 00:13:02.338 fused_ordering(296) 00:13:02.338 fused_ordering(297) 00:13:02.338 fused_ordering(298) 00:13:02.338 fused_ordering(299) 00:13:02.338 fused_ordering(300) 00:13:02.338 fused_ordering(301) 00:13:02.338 fused_ordering(302) 00:13:02.338 fused_ordering(303) 00:13:02.338 fused_ordering(304) 00:13:02.338 fused_ordering(305) 00:13:02.338 fused_ordering(306) 00:13:02.338 fused_ordering(307) 00:13:02.338 fused_ordering(308) 00:13:02.338 fused_ordering(309) 00:13:02.338 fused_ordering(310) 00:13:02.338 fused_ordering(311) 00:13:02.338 
fused_ordering(312) 00:13:02.338 fused_ordering(313) 00:13:02.338 fused_ordering(314) 00:13:02.338 fused_ordering(315) 00:13:02.338 fused_ordering(316) 00:13:02.338 fused_ordering(317) 00:13:02.338 fused_ordering(318) 00:13:02.338 fused_ordering(319) 00:13:02.338 fused_ordering(320) 00:13:02.338 fused_ordering(321) 00:13:02.338 fused_ordering(322) 00:13:02.338 fused_ordering(323) 00:13:02.338 fused_ordering(324) 00:13:02.338 fused_ordering(325) 00:13:02.338 fused_ordering(326) 00:13:02.338 fused_ordering(327) 00:13:02.338 fused_ordering(328) 00:13:02.338 fused_ordering(329) 00:13:02.338 fused_ordering(330) 00:13:02.338 fused_ordering(331) 00:13:02.338 fused_ordering(332) 00:13:02.338 fused_ordering(333) 00:13:02.338 fused_ordering(334) 00:13:02.338 fused_ordering(335) 00:13:02.338 fused_ordering(336) 00:13:02.338 fused_ordering(337) 00:13:02.338 fused_ordering(338) 00:13:02.338 fused_ordering(339) 00:13:02.338 fused_ordering(340) 00:13:02.338 fused_ordering(341) 00:13:02.338 fused_ordering(342) 00:13:02.338 fused_ordering(343) 00:13:02.338 fused_ordering(344) 00:13:02.338 fused_ordering(345) 00:13:02.338 fused_ordering(346) 00:13:02.338 fused_ordering(347) 00:13:02.338 fused_ordering(348) 00:13:02.338 fused_ordering(349) 00:13:02.338 fused_ordering(350) 00:13:02.338 fused_ordering(351) 00:13:02.338 fused_ordering(352) 00:13:02.338 fused_ordering(353) 00:13:02.338 fused_ordering(354) 00:13:02.338 fused_ordering(355) 00:13:02.338 fused_ordering(356) 00:13:02.338 fused_ordering(357) 00:13:02.338 fused_ordering(358) 00:13:02.338 fused_ordering(359) 00:13:02.338 fused_ordering(360) 00:13:02.338 fused_ordering(361) 00:13:02.338 fused_ordering(362) 00:13:02.338 fused_ordering(363) 00:13:02.338 fused_ordering(364) 00:13:02.338 fused_ordering(365) 00:13:02.338 fused_ordering(366) 00:13:02.338 fused_ordering(367) 00:13:02.338 fused_ordering(368) 00:13:02.338 fused_ordering(369) 00:13:02.338 fused_ordering(370) 00:13:02.338 fused_ordering(371) 00:13:02.338 fused_ordering(372) 00:13:02.338 fused_ordering(373) 00:13:02.338 fused_ordering(374) 00:13:02.338 fused_ordering(375) 00:13:02.338 fused_ordering(376) 00:13:02.338 fused_ordering(377) 00:13:02.338 fused_ordering(378) 00:13:02.338 fused_ordering(379) 00:13:02.338 fused_ordering(380) 00:13:02.338 fused_ordering(381) 00:13:02.338 fused_ordering(382) 00:13:02.338 fused_ordering(383) 00:13:02.338 fused_ordering(384) 00:13:02.338 fused_ordering(385) 00:13:02.338 fused_ordering(386) 00:13:02.338 fused_ordering(387) 00:13:02.338 fused_ordering(388) 00:13:02.338 fused_ordering(389) 00:13:02.338 fused_ordering(390) 00:13:02.338 fused_ordering(391) 00:13:02.338 fused_ordering(392) 00:13:02.338 fused_ordering(393) 00:13:02.338 fused_ordering(394) 00:13:02.338 fused_ordering(395) 00:13:02.338 fused_ordering(396) 00:13:02.338 fused_ordering(397) 00:13:02.338 fused_ordering(398) 00:13:02.338 fused_ordering(399) 00:13:02.338 fused_ordering(400) 00:13:02.338 fused_ordering(401) 00:13:02.338 fused_ordering(402) 00:13:02.338 fused_ordering(403) 00:13:02.338 fused_ordering(404) 00:13:02.338 fused_ordering(405) 00:13:02.338 fused_ordering(406) 00:13:02.338 fused_ordering(407) 00:13:02.338 fused_ordering(408) 00:13:02.338 fused_ordering(409) 00:13:02.338 fused_ordering(410) 00:13:02.902 fused_ordering(411) 00:13:02.902 fused_ordering(412) 00:13:02.902 fused_ordering(413) 00:13:02.902 fused_ordering(414) 00:13:02.902 fused_ordering(415) 00:13:02.902 fused_ordering(416) 00:13:02.902 fused_ordering(417) 00:13:02.902 fused_ordering(418) 00:13:02.902 fused_ordering(419) 
00:13:02.902 fused_ordering(420) 00:13:02.902 fused_ordering(421) 00:13:02.902 fused_ordering(422) 00:13:02.902 fused_ordering(423) 00:13:02.902 fused_ordering(424) 00:13:02.902 fused_ordering(425) 00:13:02.902 fused_ordering(426) 00:13:02.902 fused_ordering(427) 00:13:02.902 fused_ordering(428) 00:13:02.902 fused_ordering(429) 00:13:02.902 fused_ordering(430) 00:13:02.902 fused_ordering(431) 00:13:02.902 fused_ordering(432) 00:13:02.902 fused_ordering(433) 00:13:02.902 fused_ordering(434) 00:13:02.902 fused_ordering(435) 00:13:02.902 fused_ordering(436) 00:13:02.902 fused_ordering(437) 00:13:02.902 fused_ordering(438) 00:13:02.902 fused_ordering(439) 00:13:02.902 fused_ordering(440) 00:13:02.902 fused_ordering(441) 00:13:02.902 fused_ordering(442) 00:13:02.902 fused_ordering(443) 00:13:02.902 fused_ordering(444) 00:13:02.902 fused_ordering(445) 00:13:02.902 fused_ordering(446) 00:13:02.902 fused_ordering(447) 00:13:02.902 fused_ordering(448) 00:13:02.902 fused_ordering(449) 00:13:02.902 fused_ordering(450) 00:13:02.902 fused_ordering(451) 00:13:02.902 fused_ordering(452) 00:13:02.902 fused_ordering(453) 00:13:02.902 fused_ordering(454) 00:13:02.902 fused_ordering(455) 00:13:02.902 fused_ordering(456) 00:13:02.902 fused_ordering(457) 00:13:02.902 fused_ordering(458) 00:13:02.902 fused_ordering(459) 00:13:02.902 fused_ordering(460) 00:13:02.902 fused_ordering(461) 00:13:02.902 fused_ordering(462) 00:13:02.902 fused_ordering(463) 00:13:02.902 fused_ordering(464) 00:13:02.902 fused_ordering(465) 00:13:02.902 fused_ordering(466) 00:13:02.902 fused_ordering(467) 00:13:02.902 fused_ordering(468) 00:13:02.902 fused_ordering(469) 00:13:02.902 fused_ordering(470) 00:13:02.902 fused_ordering(471) 00:13:02.902 fused_ordering(472) 00:13:02.902 fused_ordering(473) 00:13:02.902 fused_ordering(474) 00:13:02.902 fused_ordering(475) 00:13:02.902 fused_ordering(476) 00:13:02.902 fused_ordering(477) 00:13:02.902 fused_ordering(478) 00:13:02.902 fused_ordering(479) 00:13:02.902 fused_ordering(480) 00:13:02.902 fused_ordering(481) 00:13:02.902 fused_ordering(482) 00:13:02.902 fused_ordering(483) 00:13:02.902 fused_ordering(484) 00:13:02.902 fused_ordering(485) 00:13:02.902 fused_ordering(486) 00:13:02.902 fused_ordering(487) 00:13:02.902 fused_ordering(488) 00:13:02.902 fused_ordering(489) 00:13:02.902 fused_ordering(490) 00:13:02.902 fused_ordering(491) 00:13:02.902 fused_ordering(492) 00:13:02.902 fused_ordering(493) 00:13:02.902 fused_ordering(494) 00:13:02.902 fused_ordering(495) 00:13:02.902 fused_ordering(496) 00:13:02.902 fused_ordering(497) 00:13:02.902 fused_ordering(498) 00:13:02.902 fused_ordering(499) 00:13:02.902 fused_ordering(500) 00:13:02.902 fused_ordering(501) 00:13:02.902 fused_ordering(502) 00:13:02.902 fused_ordering(503) 00:13:02.902 fused_ordering(504) 00:13:02.902 fused_ordering(505) 00:13:02.902 fused_ordering(506) 00:13:02.902 fused_ordering(507) 00:13:02.902 fused_ordering(508) 00:13:02.902 fused_ordering(509) 00:13:02.902 fused_ordering(510) 00:13:02.902 fused_ordering(511) 00:13:02.902 fused_ordering(512) 00:13:02.902 fused_ordering(513) 00:13:02.902 fused_ordering(514) 00:13:02.902 fused_ordering(515) 00:13:02.902 fused_ordering(516) 00:13:02.902 fused_ordering(517) 00:13:02.902 fused_ordering(518) 00:13:02.902 fused_ordering(519) 00:13:02.902 fused_ordering(520) 00:13:02.902 fused_ordering(521) 00:13:02.902 fused_ordering(522) 00:13:02.902 fused_ordering(523) 00:13:02.902 fused_ordering(524) 00:13:02.902 fused_ordering(525) 00:13:02.902 fused_ordering(526) 00:13:02.902 
fused_ordering(527) 00:13:02.902 [fused_ordering(528) through fused_ordering(955) condensed: one fused_ordering(N) entry per iteration, timestamps advancing from 00:13:02.902 to 00:13:04.403] fused_ordering(956) 00:13:04.403
fused_ordering(957) 00:13:04.403 fused_ordering(958) 00:13:04.403 fused_ordering(959) 00:13:04.403 fused_ordering(960) 00:13:04.403 fused_ordering(961) 00:13:04.403 fused_ordering(962) 00:13:04.403 fused_ordering(963) 00:13:04.403 fused_ordering(964) 00:13:04.403 fused_ordering(965) 00:13:04.403 fused_ordering(966) 00:13:04.403 fused_ordering(967) 00:13:04.403 fused_ordering(968) 00:13:04.403 fused_ordering(969) 00:13:04.403 fused_ordering(970) 00:13:04.403 fused_ordering(971) 00:13:04.403 fused_ordering(972) 00:13:04.403 fused_ordering(973) 00:13:04.403 fused_ordering(974) 00:13:04.403 fused_ordering(975) 00:13:04.403 fused_ordering(976) 00:13:04.403 fused_ordering(977) 00:13:04.403 fused_ordering(978) 00:13:04.403 fused_ordering(979) 00:13:04.403 fused_ordering(980) 00:13:04.403 fused_ordering(981) 00:13:04.403 fused_ordering(982) 00:13:04.403 fused_ordering(983) 00:13:04.403 fused_ordering(984) 00:13:04.403 fused_ordering(985) 00:13:04.403 fused_ordering(986) 00:13:04.403 fused_ordering(987) 00:13:04.403 fused_ordering(988) 00:13:04.403 fused_ordering(989) 00:13:04.403 fused_ordering(990) 00:13:04.403 fused_ordering(991) 00:13:04.403 fused_ordering(992) 00:13:04.403 fused_ordering(993) 00:13:04.403 fused_ordering(994) 00:13:04.403 fused_ordering(995) 00:13:04.403 fused_ordering(996) 00:13:04.403 fused_ordering(997) 00:13:04.403 fused_ordering(998) 00:13:04.403 fused_ordering(999) 00:13:04.403 fused_ordering(1000) 00:13:04.403 fused_ordering(1001) 00:13:04.403 fused_ordering(1002) 00:13:04.403 fused_ordering(1003) 00:13:04.403 fused_ordering(1004) 00:13:04.403 fused_ordering(1005) 00:13:04.403 fused_ordering(1006) 00:13:04.403 fused_ordering(1007) 00:13:04.403 fused_ordering(1008) 00:13:04.403 fused_ordering(1009) 00:13:04.403 fused_ordering(1010) 00:13:04.403 fused_ordering(1011) 00:13:04.403 fused_ordering(1012) 00:13:04.403 fused_ordering(1013) 00:13:04.403 fused_ordering(1014) 00:13:04.403 fused_ordering(1015) 00:13:04.403 fused_ordering(1016) 00:13:04.403 fused_ordering(1017) 00:13:04.403 fused_ordering(1018) 00:13:04.403 fused_ordering(1019) 00:13:04.403 fused_ordering(1020) 00:13:04.403 fused_ordering(1021) 00:13:04.403 fused_ordering(1022) 00:13:04.403 fused_ordering(1023) 00:13:04.403 15:36:43 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:04.403 15:36:43 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:04.403 15:36:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:04.403 15:36:43 -- nvmf/common.sh@116 -- # sync 00:13:04.403 15:36:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:04.403 15:36:43 -- nvmf/common.sh@119 -- # set +e 00:13:04.403 15:36:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:04.403 15:36:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:04.403 rmmod nvme_tcp 00:13:04.403 rmmod nvme_fabrics 00:13:04.661 rmmod nvme_keyring 00:13:04.661 15:36:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:04.661 15:36:43 -- nvmf/common.sh@123 -- # set -e 00:13:04.661 15:36:43 -- nvmf/common.sh@124 -- # return 0 00:13:04.661 15:36:43 -- nvmf/common.sh@477 -- # '[' -n 2080856 ']' 00:13:04.661 15:36:43 -- nvmf/common.sh@478 -- # killprocess 2080856 00:13:04.661 15:36:43 -- common/autotest_common.sh@926 -- # '[' -z 2080856 ']' 00:13:04.661 15:36:43 -- common/autotest_common.sh@930 -- # kill -0 2080856 00:13:04.661 15:36:43 -- common/autotest_common.sh@931 -- # uname 00:13:04.661 15:36:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:04.661 15:36:43 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 2080856 00:13:04.661 15:36:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:04.661 15:36:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:04.661 15:36:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2080856' 00:13:04.661 killing process with pid 2080856 00:13:04.661 15:36:43 -- common/autotest_common.sh@945 -- # kill 2080856 00:13:04.661 15:36:43 -- common/autotest_common.sh@950 -- # wait 2080856 00:13:04.920 15:36:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:04.920 15:36:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:04.920 15:36:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:04.920 15:36:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.920 15:36:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:04.920 15:36:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.920 15:36:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.920 15:36:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.823 15:36:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:06.823 00:13:06.823 real 0m8.921s 00:13:06.823 user 0m6.726s 00:13:06.823 sys 0m3.946s 00:13:06.823 15:36:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:06.823 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:06.823 ************************************ 00:13:06.823 END TEST nvmf_fused_ordering 00:13:06.823 ************************************ 00:13:06.823 15:36:46 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:06.823 15:36:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:06.823 15:36:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:06.823 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:06.823 ************************************ 00:13:06.823 START TEST nvmf_delete_subsystem 00:13:06.823 ************************************ 00:13:06.823 15:36:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:07.082 * Looking for test storage... 
00:13:07.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.082 15:36:46 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.082 15:36:46 -- nvmf/common.sh@7 -- # uname -s 00:13:07.082 15:36:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.082 15:36:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.082 15:36:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.082 15:36:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.082 15:36:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.082 15:36:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.082 15:36:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.082 15:36:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.082 15:36:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.082 15:36:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.082 15:36:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.082 15:36:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.082 15:36:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.082 15:36:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.082 15:36:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.082 15:36:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.082 15:36:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.082 15:36:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.082 15:36:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.082 15:36:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.082 15:36:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.082 15:36:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.082 15:36:46 -- paths/export.sh@5 -- # export PATH 00:13:07.082 15:36:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.082 15:36:46 -- nvmf/common.sh@46 -- # : 0 00:13:07.082 15:36:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:07.082 15:36:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:07.082 15:36:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:07.082 15:36:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.082 15:36:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.082 15:36:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:07.082 15:36:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:07.082 15:36:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:07.082 15:36:46 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:07.082 15:36:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:07.082 15:36:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.082 15:36:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:07.082 15:36:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:07.082 15:36:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:07.082 15:36:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.082 15:36:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.082 15:36:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.082 15:36:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:07.082 15:36:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:07.082 15:36:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:07.082 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:13:08.981 15:36:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:08.981 15:36:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:08.981 15:36:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:08.981 15:36:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:08.981 15:36:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:08.981 15:36:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:08.981 15:36:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:08.981 15:36:48 -- nvmf/common.sh@294 -- # net_devs=() 00:13:08.981 15:36:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:08.981 15:36:48 -- nvmf/common.sh@295 -- # e810=() 00:13:08.981 15:36:48 -- nvmf/common.sh@295 -- # local -ga e810 00:13:08.981 15:36:48 -- nvmf/common.sh@296 -- # x722=() 
00:13:08.981 15:36:48 -- nvmf/common.sh@296 -- # local -ga x722 00:13:08.981 15:36:48 -- nvmf/common.sh@297 -- # mlx=() 00:13:08.981 15:36:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:08.981 15:36:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:08.981 15:36:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:08.981 15:36:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:08.981 15:36:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:08.981 15:36:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:08.981 15:36:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:08.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:08.981 15:36:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:08.981 15:36:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:08.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:08.981 15:36:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:08.981 15:36:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:08.981 15:36:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.981 15:36:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:08.981 15:36:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.981 15:36:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:08.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:08.981 15:36:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:13:08.981 15:36:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:08.981 15:36:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:08.981 15:36:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:08.981 15:36:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:08.981 15:36:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:08.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:08.981 15:36:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:08.981 15:36:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:08.981 15:36:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:08.981 15:36:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:08.981 15:36:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:08.981 15:36:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:08.981 15:36:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:08.981 15:36:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:08.981 15:36:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:08.982 15:36:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:08.982 15:36:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:08.982 15:36:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:08.982 15:36:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:08.982 15:36:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:08.982 15:36:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:08.982 15:36:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:08.982 15:36:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:08.982 15:36:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:08.982 15:36:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:08.982 15:36:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:08.982 15:36:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:08.982 15:36:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.239 15:36:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.239 15:36:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.240 15:36:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:09.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:13:09.240 00:13:09.240 --- 10.0.0.2 ping statistics --- 00:13:09.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.240 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:13:09.240 15:36:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:13:09.240 00:13:09.240 --- 10.0.0.1 ping statistics --- 00:13:09.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.240 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:13:09.240 15:36:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:09.240 15:36:48 -- nvmf/common.sh@410 -- # return 0 00:13:09.240 15:36:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:09.240 15:36:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:09.240 15:36:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:09.240 15:36:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:09.240 15:36:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:09.240 15:36:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:09.240 15:36:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:09.240 15:36:48 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:09.240 15:36:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:09.240 15:36:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:09.240 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:09.240 15:36:48 -- nvmf/common.sh@469 -- # nvmfpid=2083366 00:13:09.240 15:36:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:09.240 15:36:48 -- nvmf/common.sh@470 -- # waitforlisten 2083366 00:13:09.240 15:36:48 -- common/autotest_common.sh@819 -- # '[' -z 2083366 ']' 00:13:09.240 15:36:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.240 15:36:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:09.240 15:36:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.240 15:36:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:09.240 15:36:48 -- common/autotest_common.sh@10 -- # set +x 00:13:09.240 [2024-07-10 15:36:48.470760] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:09.240 [2024-07-10 15:36:48.470831] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.240 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.240 [2024-07-10 15:36:48.533537] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:09.497 [2024-07-10 15:36:48.644628] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:09.497 [2024-07-10 15:36:48.644787] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:09.497 [2024-07-10 15:36:48.644807] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.497 [2024-07-10 15:36:48.644822] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
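The trace above brings the test network up by hand: one E810 port is moved into the cvl_0_0_ns_spdk namespace, both ends get 10.0.0.x/24 addresses, TCP port 4420 is opened, reachability is checked with ping, and nvmf_tgt is started inside the namespace. A rough manual equivalent is sketched below; it is only a sketch, reusing the interface names, addresses, core mask and workspace path shown in this log, and it simply waits on the default /var/tmp/spdk.sock RPC socket before any rpc_cmd call would be issued.

    # sketch: recreate the namespace-based target network used by this run (names/addresses taken from the trace above)
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip link set cvl_0_1 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                      # initiator side reaches the target address
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target side reaches the initiator address
    sudo modprobe nvme-tcp
    # start the target application inside the namespace with the same core mask the test uses
    sudo ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # wait for the RPC socket before configuring the target
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.5; done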
00:13:09.497 [2024-07-10 15:36:48.645070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.497 [2024-07-10 15:36:48.645077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.062 15:36:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:10.062 15:36:49 -- common/autotest_common.sh@852 -- # return 0 00:13:10.062 15:36:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:10.062 15:36:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:10.062 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.062 15:36:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.062 15:36:49 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.062 15:36:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.062 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.062 [2024-07-10 15:36:49.431694] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.062 15:36:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.062 15:36:49 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:10.062 15:36:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.062 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.319 15:36:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.320 15:36:49 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.320 15:36:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.320 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 [2024-07-10 15:36:49.447929] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.320 15:36:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.320 15:36:49 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:10.320 15:36:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.320 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 NULL1 00:13:10.320 15:36:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.320 15:36:49 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:10.320 15:36:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.320 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 Delay0 00:13:10.320 15:36:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.320 15:36:49 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.320 15:36:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.320 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:13:10.320 15:36:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.320 15:36:49 -- target/delete_subsystem.sh@28 -- # perf_pid=2083521 00:13:10.320 15:36:49 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:10.320 15:36:49 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:10.320 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.320 [2024-07-10 15:36:49.522656] 
subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
15:36:51 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
15:36:51 -- common/autotest_common.sh@551 -- # xtrace_disable
15:36:51 -- common/autotest_common.sh@10 -- # set +x
[00:13:12.475 through 00:13:13.407: the I/O outstanding in spdk_nvme_perf drains with errors while the subsystem is deleted; the repeated "Read completed with error (sct=0, sc=8)", "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines are condensed here. The distinct messages interleaved with them were:]
[2024-07-10 15:36:51.611793] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b89b0 is same with the state(5) to be set
[2024-07-10 15:36:51.613412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a2c00c480 is same with the state(5) to be set
[2024-07-10 15:36:52.579604] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10d75a0 is same with the state(5) to be set
[2024-07-10 15:36:52.615312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7f90 is same with the state(5) to be set
[2024-07-10 15:36:52.616546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8a2c00c1d0 is same with the state(5) to be set
[2024-07-10 15:36:52.617013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b7e10 is same with the state(5) to be set
[2024-07-10 15:36:52.617181] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10b8c60 is same with the state(5) to be set
[2024-07-10 15:36:52.618014] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10d75a0 (9): Bad file descriptor
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
15:36:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
15:36:52 -- target/delete_subsystem.sh@34 -- # delay=0
15:36:52 -- target/delete_subsystem.sh@35 -- # kill -0 2083521
15:36:52 -- target/delete_subsystem.sh@36 -- # sleep 0.5
Initializing NVMe Controllers
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
Controller IO queue size 128, less than required.
Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
Initialization complete. Launching workers.
00:13:13.407 ======================================================== 00:13:13.407 Latency(us) 00:13:13.407 Device Information : IOPS MiB/s Average min max 00:13:13.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 166.90 0.08 969940.73 2294.98 1012168.70 00:13:13.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 146.53 0.07 935295.94 328.96 2002606.63 00:13:13.407 ======================================================== 00:13:13.407 Total : 313.43 0.15 953743.88 328.96 2002606.63 00:13:13.407 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@35 -- # kill -0 2083521 00:13:13.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2083521) - No such process 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@45 -- # NOT wait 2083521 00:13:13.972 15:36:53 -- common/autotest_common.sh@640 -- # local es=0 00:13:13.972 15:36:53 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2083521 00:13:13.972 15:36:53 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:13.972 15:36:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:13.972 15:36:53 -- common/autotest_common.sh@632 -- # type -t wait 00:13:13.972 15:36:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:13.972 15:36:53 -- common/autotest_common.sh@643 -- # wait 2083521 00:13:13.972 15:36:53 -- common/autotest_common.sh@643 -- # es=1 00:13:13.972 15:36:53 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:13.972 15:36:53 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:13.972 15:36:53 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:13.972 15:36:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.972 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 15:36:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:13.972 15:36:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.972 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 [2024-07-10 15:36:53.140303] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.972 15:36:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.972 15:36:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.972 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:13:13.972 15:36:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@54 -- # perf_pid=2083941 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:13.972 15:36:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:13.972 EAL: No free 2048 kB hugepages 
reported on node 1 00:13:13.972 [2024-07-10 15:36:53.204200] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:14.537 15:36:53 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:14.537 15:36:53 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:14.537 15:36:53 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:14.794 15:36:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:14.794 15:36:54 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:14.794 15:36:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:15.360 15:36:54 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:15.360 15:36:54 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:15.360 15:36:54 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:15.992 15:36:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:15.992 15:36:55 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:15.992 15:36:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:16.557 15:36:55 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:16.557 15:36:55 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:16.557 15:36:55 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:16.816 15:36:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:16.816 15:36:56 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:16.816 15:36:56 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:17.074 Initializing NVMe Controllers 00:13:17.074 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:17.074 Controller IO queue size 128, less than required. 00:13:17.074 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:17.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:17.074 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:17.074 Initialization complete. Launching workers. 
00:13:17.074 ======================================================== 00:13:17.074 Latency(us) 00:13:17.074 Device Information : IOPS MiB/s Average min max 00:13:17.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003389.02 1000189.10 1042472.02 00:13:17.074 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1006417.47 1000391.90 1042508.23 00:13:17.074 ======================================================== 00:13:17.074 Total : 256.00 0.12 1004903.24 1000189.10 1042508.23 00:13:17.074 00:13:17.332 15:36:56 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:17.332 15:36:56 -- target/delete_subsystem.sh@57 -- # kill -0 2083941 00:13:17.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2083941) - No such process 00:13:17.332 15:36:56 -- target/delete_subsystem.sh@67 -- # wait 2083941 00:13:17.332 15:36:56 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:17.332 15:36:56 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:17.332 15:36:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:17.332 15:36:56 -- nvmf/common.sh@116 -- # sync 00:13:17.332 15:36:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:17.332 15:36:56 -- nvmf/common.sh@119 -- # set +e 00:13:17.332 15:36:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:17.332 15:36:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:17.332 rmmod nvme_tcp 00:13:17.332 rmmod nvme_fabrics 00:13:17.590 rmmod nvme_keyring 00:13:17.590 15:36:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:17.590 15:36:56 -- nvmf/common.sh@123 -- # set -e 00:13:17.590 15:36:56 -- nvmf/common.sh@124 -- # return 0 00:13:17.590 15:36:56 -- nvmf/common.sh@477 -- # '[' -n 2083366 ']' 00:13:17.590 15:36:56 -- nvmf/common.sh@478 -- # killprocess 2083366 00:13:17.590 15:36:56 -- common/autotest_common.sh@926 -- # '[' -z 2083366 ']' 00:13:17.590 15:36:56 -- common/autotest_common.sh@930 -- # kill -0 2083366 00:13:17.590 15:36:56 -- common/autotest_common.sh@931 -- # uname 00:13:17.590 15:36:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:17.590 15:36:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2083366 00:13:17.590 15:36:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:17.590 15:36:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:17.590 15:36:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2083366' 00:13:17.590 killing process with pid 2083366 00:13:17.590 15:36:56 -- common/autotest_common.sh@945 -- # kill 2083366 00:13:17.590 15:36:56 -- common/autotest_common.sh@950 -- # wait 2083366 00:13:17.849 15:36:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:17.849 15:36:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:17.849 15:36:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:17.849 15:36:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:17.849 15:36:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:17.849 15:36:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:17.849 15:36:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:17.849 15:36:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.753 15:36:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:19.753 00:13:19.753 real 0m12.901s 00:13:19.753 user 0m29.145s 00:13:19.753 sys 0m2.970s 
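For readability, the delete_subsystem flow that just completed reduces to the RPC sequence below. This is a hedged sketch rather than the test script itself: it assumes rpc_cmd maps to scripts/rpc.py talking to the default /var/tmp/spdk.sock, and it reuses the transport options, subsystem NQN, bdev parameters and spdk_nvme_perf arguments seen in the trace.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed rpc.py location in this workspace
    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    NQN=nqn.2016-06.io.spdk:cnode1

    # target side: TCP transport and a subsystem backed by a delay-wrapped null bdev
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem $NQN -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns $NQN Delay0

    # initiator side: start a timed random 70/30 workload, then delete the subsystem underneath it
    $PERF -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    $RPC nvmf_delete_subsystem $NQN      # outstanding I/O completes with errors and perf exits non-zero
    wait $perf_pid || echo "perf reported errors, as this test expects"

The second half of the test repeats the same cycle after re-creating the subsystem and listener, which is the part that produced the clean latency summary directly above.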
00:13:19.753 15:36:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:19.753 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:19.753 ************************************ 00:13:19.753 END TEST nvmf_delete_subsystem 00:13:19.753 ************************************ 00:13:19.753 15:36:59 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:13:19.753 15:36:59 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:19.753 15:36:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:19.753 15:36:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:19.753 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:19.753 ************************************ 00:13:19.753 START TEST nvmf_nvme_cli 00:13:19.753 ************************************ 00:13:19.754 15:36:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:20.012 * Looking for test storage... 00:13:20.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.012 15:36:59 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.012 15:36:59 -- nvmf/common.sh@7 -- # uname -s 00:13:20.012 15:36:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.012 15:36:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.012 15:36:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.012 15:36:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.012 15:36:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.012 15:36:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.012 15:36:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.012 15:36:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.013 15:36:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.013 15:36:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.013 15:36:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.013 15:36:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.013 15:36:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.013 15:36:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.013 15:36:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.013 15:36:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.013 15:36:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.013 15:36:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.013 15:36:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.013 15:36:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.013 15:36:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.013 15:36:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.013 15:36:59 -- paths/export.sh@5 -- # export PATH 00:13:20.013 15:36:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.013 15:36:59 -- nvmf/common.sh@46 -- # : 0 00:13:20.013 15:36:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:20.013 15:36:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:20.013 15:36:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:20.013 15:36:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.013 15:36:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.013 15:36:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:20.013 15:36:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:20.013 15:36:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:20.013 15:36:59 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:20.013 15:36:59 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:20.013 15:36:59 -- target/nvme_cli.sh@14 -- # devs=() 00:13:20.013 15:36:59 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:20.013 15:36:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:20.013 15:36:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:20.013 15:36:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:20.013 15:36:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:20.013 15:36:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:20.013 15:36:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:20.013 15:36:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:20.013 15:36:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:20.013 15:36:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:20.013 15:36:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:20.013 15:36:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:20.013 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 15:37:01 -- 
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:21.915 15:37:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:21.915 15:37:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:21.915 15:37:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:21.915 15:37:01 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:21.915 15:37:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:21.915 15:37:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:21.915 15:37:01 -- nvmf/common.sh@294 -- # net_devs=() 00:13:21.915 15:37:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:21.915 15:37:01 -- nvmf/common.sh@295 -- # e810=() 00:13:21.915 15:37:01 -- nvmf/common.sh@295 -- # local -ga e810 00:13:21.915 15:37:01 -- nvmf/common.sh@296 -- # x722=() 00:13:21.915 15:37:01 -- nvmf/common.sh@296 -- # local -ga x722 00:13:21.915 15:37:01 -- nvmf/common.sh@297 -- # mlx=() 00:13:21.915 15:37:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:21.915 15:37:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:21.915 15:37:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:21.915 15:37:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:21.915 15:37:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:21.915 15:37:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:21.915 15:37:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:21.915 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:21.915 15:37:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:21.915 15:37:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:21.915 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:21.915 15:37:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
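What this chunk of the trace is doing is filtering the host's PCI devices against a table of supported NIC IDs (the e810, x722 and mlx arrays) and keeping the two Intel E810 ports it finds (0x8086 - 0x159b, bound to the ice driver). A simplified stand-in for that lookup, written as a direct sysfs walk rather than the pci_bus_cache map the real nvmf/common.sh builds, could look like this:

    # Hypothetical simplification: scan sysfs for NICs matching the E810
    # vendor:device pair (0x8086:0x159b) that the log shows being selected.
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor")
        device=$(<"$pci/device")
        if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
        fi
    done

The matching devices are then mapped to their kernel net interfaces (cvl_0_0 and cvl_0_1 in the next chunk) via /sys/bus/pci/devices/$pci/net/.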
00:13:21.915 15:37:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:21.915 15:37:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:21.915 15:37:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.915 15:37:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:21.915 15:37:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.915 15:37:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:21.915 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:21.915 15:37:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.915 15:37:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:21.915 15:37:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:21.915 15:37:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:21.915 15:37:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:21.915 15:37:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:21.915 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:21.915 15:37:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:21.915 15:37:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:21.915 15:37:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:21.915 15:37:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:21.915 15:37:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:21.915 15:37:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:21.915 15:37:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:21.915 15:37:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:21.915 15:37:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:21.915 15:37:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:21.915 15:37:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:21.915 15:37:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:21.915 15:37:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:21.915 15:37:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:21.915 15:37:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:21.915 15:37:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:21.915 15:37:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:21.915 15:37:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:21.915 15:37:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:21.915 15:37:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:21.915 15:37:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:21.915 15:37:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:21.915 15:37:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:21.915 15:37:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:21.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:21.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:13:21.915 00:13:21.915 --- 10.0.0.2 ping statistics --- 00:13:21.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.915 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:13:21.915 15:37:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:21.915 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:21.915 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:13:21.915 00:13:21.915 --- 10.0.0.1 ping statistics --- 00:13:21.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:21.915 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:13:21.915 15:37:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:21.915 15:37:01 -- nvmf/common.sh@410 -- # return 0 00:13:21.915 15:37:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:21.915 15:37:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:21.915 15:37:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:21.915 15:37:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:21.915 15:37:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:21.915 15:37:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:21.915 15:37:01 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:21.915 15:37:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:21.915 15:37:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:21.915 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:13:21.915 15:37:01 -- nvmf/common.sh@469 -- # nvmfpid=2086383 00:13:21.915 15:37:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:21.915 15:37:01 -- nvmf/common.sh@470 -- # waitforlisten 2086383 00:13:21.915 15:37:01 -- common/autotest_common.sh@819 -- # '[' -z 2086383 ']' 00:13:21.915 15:37:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.915 15:37:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:21.915 15:37:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.915 15:37:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:21.915 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:13:22.173 [2024-07-10 15:37:01.299148] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:22.173 [2024-07-10 15:37:01.299216] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.173 EAL: No free 2048 kB hugepages reported on node 1 00:13:22.173 [2024-07-10 15:37:01.363272] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:22.173 [2024-07-10 15:37:01.469244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:22.173 [2024-07-10 15:37:01.469392] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:22.173 [2024-07-10 15:37:01.469410] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:22.173 [2024-07-10 15:37:01.469423] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.173 [2024-07-10 15:37:01.469486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.173 [2024-07-10 15:37:01.469546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:22.173 [2024-07-10 15:37:01.469588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:22.173 [2024-07-10 15:37:01.469591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:23.106 15:37:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:23.106 15:37:02 -- common/autotest_common.sh@852 -- # return 0 00:13:23.106 15:37:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:23.106 15:37:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:23.106 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 15:37:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:23.106 15:37:02 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:23.106 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.106 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 [2024-07-10 15:37:02.262934] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:23.106 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.106 15:37:02 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:23.106 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.106 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 Malloc0 00:13:23.106 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.106 15:37:02 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:23.106 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.106 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 Malloc1 00:13:23.106 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.106 15:37:02 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:23.106 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.106 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.106 15:37:02 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:23.106 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.106 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.106 15:37:02 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:23.106 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.106 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.106 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.107 15:37:02 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:23.107 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.107 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.107 [2024-07-10 15:37:02.347572] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:13:23.107 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.107 15:37:02 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:23.107 15:37:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:23.107 15:37:02 -- common/autotest_common.sh@10 -- # set +x 00:13:23.107 15:37:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:23.107 15:37:02 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:13:23.365 00:13:23.365 Discovery Log Number of Records 2, Generation counter 2 00:13:23.365 =====Discovery Log Entry 0====== 00:13:23.365 trtype: tcp 00:13:23.365 adrfam: ipv4 00:13:23.365 subtype: current discovery subsystem 00:13:23.365 treq: not required 00:13:23.365 portid: 0 00:13:23.365 trsvcid: 4420 00:13:23.365 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:23.365 traddr: 10.0.0.2 00:13:23.365 eflags: explicit discovery connections, duplicate discovery information 00:13:23.365 sectype: none 00:13:23.365 =====Discovery Log Entry 1====== 00:13:23.365 trtype: tcp 00:13:23.365 adrfam: ipv4 00:13:23.365 subtype: nvme subsystem 00:13:23.365 treq: not required 00:13:23.365 portid: 0 00:13:23.365 trsvcid: 4420 00:13:23.365 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:23.365 traddr: 10.0.0.2 00:13:23.365 eflags: none 00:13:23.365 sectype: none 00:13:23.365 15:37:02 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:23.365 15:37:02 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:23.365 15:37:02 -- nvmf/common.sh@510 -- # local dev _ 00:13:23.365 15:37:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:23.365 15:37:02 -- nvmf/common.sh@509 -- # nvme list 00:13:23.365 15:37:02 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:23.365 15:37:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:23.365 15:37:02 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:23.365 15:37:02 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:23.365 15:37:02 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:23.365 15:37:02 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.929 15:37:03 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:23.929 15:37:03 -- common/autotest_common.sh@1177 -- # local i=0 00:13:23.929 15:37:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.929 15:37:03 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:13:23.930 15:37:03 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:13:23.930 15:37:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:25.825 15:37:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:25.825 15:37:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:25.825 15:37:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.825 15:37:05 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:13:25.825 15:37:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.825 15:37:05 -- common/autotest_common.sh@1187 -- # return 0 00:13:25.825 15:37:05 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:25.825 15:37:05 -- 
nvmf/common.sh@510 -- # local dev _ 00:13:25.825 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:25.825 15:37:05 -- nvmf/common.sh@509 -- # nvme list 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:26.083 /dev/nvme0n1 ]] 00:13:26.083 15:37:05 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:26.083 15:37:05 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:26.083 15:37:05 -- nvmf/common.sh@510 -- # local dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@509 -- # nvme list 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:26.083 15:37:05 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:13:26.083 15:37:05 -- nvmf/common.sh@512 -- # read -r dev _ 00:13:26.083 15:37:05 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:26.083 15:37:05 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:26.340 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.340 15:37:05 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:26.340 15:37:05 -- common/autotest_common.sh@1198 -- # local i=0 00:13:26.340 15:37:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:26.340 15:37:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.340 15:37:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:26.341 15:37:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:26.341 15:37:05 -- common/autotest_common.sh@1210 -- # return 0 00:13:26.341 15:37:05 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:26.341 15:37:05 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:26.341 15:37:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:26.341 15:37:05 -- common/autotest_common.sh@10 -- # set +x 00:13:26.341 15:37:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:26.341 15:37:05 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:26.341 15:37:05 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:26.341 15:37:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:26.341 15:37:05 -- nvmf/common.sh@116 -- # sync 00:13:26.341 15:37:05 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:26.341 15:37:05 -- nvmf/common.sh@119 -- # set +e 00:13:26.341 15:37:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:26.341 15:37:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:26.341 rmmod nvme_tcp 00:13:26.599 rmmod nvme_fabrics 00:13:26.599 rmmod nvme_keyring 00:13:26.599 15:37:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:26.599 15:37:05 -- nvmf/common.sh@123 -- # set -e 00:13:26.599 15:37:05 -- nvmf/common.sh@124 -- # return 0 00:13:26.599 15:37:05 -- nvmf/common.sh@477 -- # '[' -n 2086383 ']' 00:13:26.599 15:37:05 -- nvmf/common.sh@478 -- # killprocess 2086383 00:13:26.599 15:37:05 -- common/autotest_common.sh@926 -- # '[' -z 2086383 ']' 00:13:26.599 15:37:05 -- common/autotest_common.sh@930 -- # kill -0 2086383 00:13:26.599 15:37:05 -- common/autotest_common.sh@931 -- # uname 00:13:26.599 15:37:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:26.599 15:37:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2086383 00:13:26.599 15:37:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:26.599 15:37:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:26.599 15:37:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2086383' 00:13:26.599 killing process with pid 2086383 00:13:26.599 15:37:05 -- common/autotest_common.sh@945 -- # kill 2086383 00:13:26.599 15:37:05 -- common/autotest_common.sh@950 -- # wait 2086383 00:13:26.858 15:37:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:26.858 15:37:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:26.858 15:37:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:26.858 15:37:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.858 15:37:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:26.858 15:37:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.858 15:37:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:26.858 15:37:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.391 15:37:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:29.391 00:13:29.391 real 0m9.052s 00:13:29.391 user 0m18.979s 00:13:29.391 sys 0m2.144s 00:13:29.391 15:37:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:29.391 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:13:29.391 ************************************ 00:13:29.391 END TEST nvmf_nvme_cli 00:13:29.391 ************************************ 00:13:29.391 15:37:08 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:29.391 15:37:08 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:29.391 15:37:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:29.391 15:37:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:29.391 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:13:29.391 ************************************ 00:13:29.391 START TEST nvmf_host_management 00:13:29.391 ************************************ 00:13:29.391 15:37:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:29.391 * Looking for test storage... 
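Before the host_management run continues below, the nvme_cli flow that just finished is worth condensing: stripped of the xtrace noise it is a short sequence of target RPCs followed by standard nvme-cli commands. The sketch below replays that sequence using the same rpc_cmd wrapper and the NVME_HOSTNQN/NVME_HOSTID variables that nvmf/common.sh defines in the trace, and assumes an nvmf target is already listening on the RPC socket; every command and argument is taken from the trace, only the comments are added:

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd bdev_malloc_create 64 512 -b Malloc0
    rpc_cmd bdev_malloc_create 64 512 -b Malloc1
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420
    nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # ... the test then checks that /dev/nvme0n1 and /dev/nvme0n2 show up in `nvme list` ...
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1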
00:13:29.391 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.391 15:37:08 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.391 15:37:08 -- nvmf/common.sh@7 -- # uname -s 00:13:29.391 15:37:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.391 15:37:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.391 15:37:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.391 15:37:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.391 15:37:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.391 15:37:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.391 15:37:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.391 15:37:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.391 15:37:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.391 15:37:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.391 15:37:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.391 15:37:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:29.391 15:37:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.391 15:37:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.391 15:37:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.391 15:37:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.391 15:37:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.391 15:37:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.391 15:37:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.391 15:37:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.391 15:37:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.391 15:37:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.391 15:37:08 -- paths/export.sh@5 -- # export PATH 00:13:29.391 15:37:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.391 15:37:08 -- nvmf/common.sh@46 -- # : 0 00:13:29.391 15:37:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:29.391 15:37:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:29.391 15:37:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:29.391 15:37:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:29.391 15:37:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:29.391 15:37:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:29.391 15:37:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:29.391 15:37:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:29.391 15:37:08 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:29.391 15:37:08 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:29.391 15:37:08 -- target/host_management.sh@104 -- # nvmftestinit 00:13:29.391 15:37:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:29.391 15:37:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.391 15:37:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:29.391 15:37:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:29.391 15:37:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:29.391 15:37:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.391 15:37:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:29.392 15:37:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.392 15:37:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:29.392 15:37:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:29.392 15:37:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:29.392 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 15:37:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:31.289 15:37:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:31.289 15:37:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:31.289 15:37:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:31.289 15:37:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:31.289 15:37:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:31.289 15:37:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:31.289 15:37:10 -- nvmf/common.sh@294 -- # net_devs=() 00:13:31.289 15:37:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:31.289 
15:37:10 -- nvmf/common.sh@295 -- # e810=() 00:13:31.289 15:37:10 -- nvmf/common.sh@295 -- # local -ga e810 00:13:31.289 15:37:10 -- nvmf/common.sh@296 -- # x722=() 00:13:31.289 15:37:10 -- nvmf/common.sh@296 -- # local -ga x722 00:13:31.289 15:37:10 -- nvmf/common.sh@297 -- # mlx=() 00:13:31.289 15:37:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:31.289 15:37:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:31.289 15:37:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:31.289 15:37:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:31.289 15:37:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:31.289 15:37:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:31.289 15:37:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:31.289 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:31.289 15:37:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:31.289 15:37:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:31.289 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:31.289 15:37:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:31.289 15:37:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:31.289 15:37:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.289 15:37:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:31.289 15:37:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.289 15:37:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:13:31.289 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:31.289 15:37:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.289 15:37:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:31.289 15:37:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:31.289 15:37:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:31.289 15:37:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:31.289 15:37:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:31.289 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:31.289 15:37:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:31.289 15:37:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:31.289 15:37:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:31.289 15:37:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:31.289 15:37:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:31.289 15:37:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:31.289 15:37:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:31.289 15:37:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:31.289 15:37:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:31.289 15:37:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:31.289 15:37:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:31.289 15:37:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:31.289 15:37:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:31.289 15:37:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:31.289 15:37:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:31.289 15:37:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:31.289 15:37:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:31.289 15:37:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:31.289 15:37:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:31.289 15:37:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:31.289 15:37:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:31.289 15:37:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:31.289 15:37:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:31.289 15:37:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:31.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:31.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:13:31.289 00:13:31.289 --- 10.0.0.2 ping statistics --- 00:13:31.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.289 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:13:31.289 15:37:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:31.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:31.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:13:31.289 00:13:31.289 --- 10.0.0.1 ping statistics --- 00:13:31.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:31.289 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:13:31.289 15:37:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:31.289 15:37:10 -- nvmf/common.sh@410 -- # return 0 00:13:31.289 15:37:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:31.289 15:37:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:31.289 15:37:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:31.289 15:37:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:31.289 15:37:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:31.289 15:37:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:31.289 15:37:10 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:13:31.289 15:37:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:13:31.289 15:37:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.289 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 ************************************ 00:13:31.289 START TEST nvmf_host_management 00:13:31.289 ************************************ 00:13:31.289 15:37:10 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:13:31.289 15:37:10 -- target/host_management.sh@69 -- # starttarget 00:13:31.289 15:37:10 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:13:31.289 15:37:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:31.289 15:37:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:31.289 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 15:37:10 -- nvmf/common.sh@469 -- # nvmfpid=2088850 00:13:31.289 15:37:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:13:31.289 15:37:10 -- nvmf/common.sh@470 -- # waitforlisten 2088850 00:13:31.289 15:37:10 -- common/autotest_common.sh@819 -- # '[' -z 2088850 ']' 00:13:31.289 15:37:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.289 15:37:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:31.289 15:37:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.289 15:37:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:31.289 15:37:10 -- common/autotest_common.sh@10 -- # set +x 00:13:31.289 [2024-07-10 15:37:10.389603] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:13:31.289 [2024-07-10 15:37:10.389677] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:31.289 EAL: No free 2048 kB hugepages reported on node 1 00:13:31.289 [2024-07-10 15:37:10.461080] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:31.289 [2024-07-10 15:37:10.577310] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:31.289 [2024-07-10 15:37:10.577477] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:31.289 [2024-07-10 15:37:10.577506] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:31.289 [2024-07-10 15:37:10.577520] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:31.289 [2024-07-10 15:37:10.577634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:31.289 [2024-07-10 15:37:10.577725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:31.289 [2024-07-10 15:37:10.577769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:13:31.289 [2024-07-10 15:37:10.577772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.220 15:37:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.220 15:37:11 -- common/autotest_common.sh@852 -- # return 0 00:13:32.220 15:37:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:32.220 15:37:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:32.220 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 15:37:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.220 15:37:11 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.220 15:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.220 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:13:32.220 [2024-07-10 15:37:11.346904] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.220 15:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.220 15:37:11 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:13:32.220 15:37:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:32.220 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:13:32.221 15:37:11 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:32.221 15:37:11 -- target/host_management.sh@23 -- # cat 00:13:32.221 15:37:11 -- target/host_management.sh@30 -- # rpc_cmd 00:13:32.221 15:37:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:32.221 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:13:32.221 Malloc0 00:13:32.221 [2024-07-10 15:37:11.406024] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.221 15:37:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:32.221 15:37:11 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:13:32.221 15:37:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:32.221 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:13:32.221 15:37:11 -- target/host_management.sh@73 -- # perfpid=2089022 00:13:32.221 15:37:11 -- target/host_management.sh@74 -- # 
waitforlisten 2089022 /var/tmp/bdevperf.sock 00:13:32.221 15:37:11 -- common/autotest_common.sh@819 -- # '[' -z 2089022 ']' 00:13:32.221 15:37:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:13:32.221 15:37:11 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:13:32.221 15:37:11 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:13:32.221 15:37:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:32.221 15:37:11 -- nvmf/common.sh@520 -- # config=() 00:13:32.221 15:37:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:13:32.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:13:32.221 15:37:11 -- nvmf/common.sh@520 -- # local subsystem config 00:13:32.221 15:37:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:32.221 15:37:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:32.221 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:13:32.221 15:37:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:32.221 { 00:13:32.221 "params": { 00:13:32.221 "name": "Nvme$subsystem", 00:13:32.221 "trtype": "$TEST_TRANSPORT", 00:13:32.221 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:32.221 "adrfam": "ipv4", 00:13:32.221 "trsvcid": "$NVMF_PORT", 00:13:32.221 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:32.221 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:32.221 "hdgst": ${hdgst:-false}, 00:13:32.221 "ddgst": ${ddgst:-false} 00:13:32.221 }, 00:13:32.221 "method": "bdev_nvme_attach_controller" 00:13:32.221 } 00:13:32.221 EOF 00:13:32.221 )") 00:13:32.221 15:37:11 -- nvmf/common.sh@542 -- # cat 00:13:32.221 15:37:11 -- nvmf/common.sh@544 -- # jq . 00:13:32.221 15:37:11 -- nvmf/common.sh@545 -- # IFS=, 00:13:32.221 15:37:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:32.221 "params": { 00:13:32.221 "name": "Nvme0", 00:13:32.221 "trtype": "tcp", 00:13:32.221 "traddr": "10.0.0.2", 00:13:32.221 "adrfam": "ipv4", 00:13:32.221 "trsvcid": "4420", 00:13:32.221 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:32.221 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:32.221 "hdgst": false, 00:13:32.221 "ddgst": false 00:13:32.221 }, 00:13:32.221 "method": "bdev_nvme_attach_controller" 00:13:32.221 }' 00:13:32.221 [2024-07-10 15:37:11.471971] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:32.221 [2024-07-10 15:37:11.472051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089022 ] 00:13:32.221 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.221 [2024-07-10 15:37:11.537161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.478 [2024-07-10 15:37:11.647150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.735 Running I/O for 10 seconds... 
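At this point bdevperf has been launched against the target with -q 64 -o 65536 -w verify -t 10 (queue depth 64, 64 KiB I/Os, a verify workload, 10 seconds), and its bdev configuration is supplied on /dev/fd/63 by gen_nvmf_target_json. The JSON fragment that helper prints is visible in the trace, interleaved with timestamps; reassembled, the controller it attaches is the following (a reconstruction of the printf output above, held in a hypothetical shell variable; the wrapper that turns this fragment into the full config file handed to bdevperf is not shown in this excerpt):

    bdevperf_controller='{
      "params": {
        "name": "Nvme0",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }'
    echo "$bdevperf_controller"

The next chunk then polls bdev_get_iostat -b Nvme0n1 through jq until at least 100 reads have completed (1440 are observed) and removes the host from the subsystem with nvmf_subsystem_remove_host while I/O is still running, after which the trace fills with the repeated tcp.c:1574 qpair state messages.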
00:13:33.302 15:37:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:33.302 15:37:12 -- common/autotest_common.sh@852 -- # return 0 00:13:33.302 15:37:12 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:13:33.302 15:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.302 15:37:12 -- common/autotest_common.sh@10 -- # set +x 00:13:33.302 15:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.302 15:37:12 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:33.302 15:37:12 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:13:33.302 15:37:12 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:13:33.302 15:37:12 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:13:33.302 15:37:12 -- target/host_management.sh@52 -- # local ret=1 00:13:33.302 15:37:12 -- target/host_management.sh@53 -- # local i 00:13:33.302 15:37:12 -- target/host_management.sh@54 -- # (( i = 10 )) 00:13:33.302 15:37:12 -- target/host_management.sh@54 -- # (( i != 0 )) 00:13:33.302 15:37:12 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:13:33.302 15:37:12 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:13:33.302 15:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.302 15:37:12 -- common/autotest_common.sh@10 -- # set +x 00:13:33.302 15:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.302 15:37:12 -- target/host_management.sh@55 -- # read_io_count=1440 00:13:33.302 15:37:12 -- target/host_management.sh@58 -- # '[' 1440 -ge 100 ']' 00:13:33.302 15:37:12 -- target/host_management.sh@59 -- # ret=0 00:13:33.302 15:37:12 -- target/host_management.sh@60 -- # break 00:13:33.302 15:37:12 -- target/host_management.sh@64 -- # return 0 00:13:33.302 15:37:12 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:33.302 15:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.302 15:37:12 -- common/autotest_common.sh@10 -- # set +x 00:13:33.302 [2024-07-10 15:37:12.469750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.302 [2024-07-10 15:37:12.469853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.302 [2024-07-10 15:37:12.469870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.302 [2024-07-10 15:37:12.469883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.302 [2024-07-10 15:37:12.469897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.302 [2024-07-10 15:37:12.469911] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.469923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.469936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the 
state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.469948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.469960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.469972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.469984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470017] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470029] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470053] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470064] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470088] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470178] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470239] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470263] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470385] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 
15:37:12.470501] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470589] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2680480 is same with the state(5) to be set 00:13:33.303 [2024-07-10 15:37:12.470797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.470844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.470887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.470924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.470954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.470979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 
[2024-07-10 15:37:12.471082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 15:37:12.471562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.303 [2024-07-10 
15:37:12.471621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.303 [2024-07-10 15:37:12.471647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.471670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.471698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.471732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.471760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.471783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.471811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.471833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.471861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.471884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.471911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.471935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.471963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.471986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472667] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.472953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.472978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473727] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.304 [2024-07-10 15:37:12.473830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.304 [2024-07-10 15:37:12.473859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.305 [2024-07-10 15:37:12.473882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.305 [2024-07-10 15:37:12.473910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.305 [2024-07-10 15:37:12.473933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.305 [2024-07-10 15:37:12.473962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.305 [2024-07-10 15:37:12.473985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.305 [2024-07-10 15:37:12.474013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.305 [2024-07-10 15:37:12.474037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.305 [2024-07-10 15:37:12.474065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.305 [2024-07-10 15:37:12.474089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.305 [2024-07-10 15:37:12.474117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.305 [2024-07-10 15:37:12.474140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.305 15:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.305 [2024-07-10 15:37:12.474169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:13:33.305 [2024-07-10 15:37:12.474195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:33.305 [2024-07-10 15:37:12.474226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x704b50 is same with the state(5) to be set 00:13:33.305 
15:37:12 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:13:33.305 [2024-07-10 15:37:12.474316] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x704b50 was disconnected and freed. reset controller. 00:13:33.305 15:37:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:33.305 15:37:12 -- common/autotest_common.sh@10 -- # set +x 00:13:33.305 [2024-07-10 15:37:12.475695] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:13:33.305 task offset: 66176 on job bdev=Nvme0n1 fails 00:13:33.305 00:13:33.305 Latency(us) 00:13:33.305 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:33.305 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:33.305 Job: Nvme0n1 ended in about 0.60 seconds with error 00:13:33.305 Verification LBA range: start 0x0 length 0x400 00:13:33.305 Nvme0n1 : 0.60 2536.94 158.56 106.54 0.00 23909.72 6189.51 26214.40 00:13:33.305 =================================================================================================================== 00:13:33.305 Total : 2536.94 158.56 106.54 0.00 23909.72 6189.51 26214.40 00:13:33.305 [2024-07-10 15:37:12.478000] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:13:33.305 [2024-07-10 15:37:12.478040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x707400 (9): Bad file descriptor 00:13:33.305 15:37:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:33.305 15:37:12 -- target/host_management.sh@87 -- # sleep 1 00:13:33.305 [2024-07-10 15:37:12.523205] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:13:34.236 15:37:13 -- target/host_management.sh@91 -- # kill -9 2089022 00:13:34.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2089022) - No such process 00:13:34.236 15:37:13 -- target/host_management.sh@91 -- # true 00:13:34.236 15:37:13 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:13:34.236 15:37:13 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:13:34.236 15:37:13 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:13:34.236 15:37:13 -- nvmf/common.sh@520 -- # config=() 00:13:34.236 15:37:13 -- nvmf/common.sh@520 -- # local subsystem config 00:13:34.236 15:37:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:13:34.236 15:37:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:13:34.236 { 00:13:34.236 "params": { 00:13:34.236 "name": "Nvme$subsystem", 00:13:34.236 "trtype": "$TEST_TRANSPORT", 00:13:34.236 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:34.236 "adrfam": "ipv4", 00:13:34.236 "trsvcid": "$NVMF_PORT", 00:13:34.236 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:34.236 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:34.236 "hdgst": ${hdgst:-false}, 00:13:34.236 "ddgst": ${ddgst:-false} 00:13:34.236 }, 00:13:34.236 "method": "bdev_nvme_attach_controller" 00:13:34.236 } 00:13:34.236 EOF 00:13:34.236 )") 00:13:34.236 15:37:13 -- nvmf/common.sh@542 -- # cat 00:13:34.236 15:37:13 -- nvmf/common.sh@544 -- # jq . 
00:13:34.236 15:37:13 -- nvmf/common.sh@545 -- # IFS=, 00:13:34.236 15:37:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:13:34.236 "params": { 00:13:34.236 "name": "Nvme0", 00:13:34.236 "trtype": "tcp", 00:13:34.236 "traddr": "10.0.0.2", 00:13:34.236 "adrfam": "ipv4", 00:13:34.236 "trsvcid": "4420", 00:13:34.236 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:13:34.236 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:13:34.236 "hdgst": false, 00:13:34.236 "ddgst": false 00:13:34.236 }, 00:13:34.236 "method": "bdev_nvme_attach_controller" 00:13:34.236 }' 00:13:34.236 [2024-07-10 15:37:13.525142] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:34.236 [2024-07-10 15:37:13.525221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2089307 ] 00:13:34.236 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.236 [2024-07-10 15:37:13.585860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:34.493 [2024-07-10 15:37:13.696813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.751 Running I/O for 1 seconds... 00:13:35.684 00:13:35.684 Latency(us) 00:13:35.684 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:35.684 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:13:35.684 Verification LBA range: start 0x0 length 0x400 00:13:35.684 Nvme0n1 : 1.01 3409.37 213.09 0.00 0.00 18487.96 2451.53 26214.40 00:13:35.684 =================================================================================================================== 00:13:35.684 Total : 3409.37 213.09 0.00 0.00 18487.96 2451.53 26214.40 00:13:35.942 15:37:15 -- target/host_management.sh@101 -- # stoptarget 00:13:35.942 15:37:15 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:13:35.942 15:37:15 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:13:35.942 15:37:15 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:13:35.942 15:37:15 -- target/host_management.sh@40 -- # nvmftestfini 00:13:35.942 15:37:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:35.942 15:37:15 -- nvmf/common.sh@116 -- # sync 00:13:35.942 15:37:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:35.942 15:37:15 -- nvmf/common.sh@119 -- # set +e 00:13:35.942 15:37:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:35.942 15:37:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:35.942 rmmod nvme_tcp 00:13:35.942 rmmod nvme_fabrics 00:13:35.942 rmmod nvme_keyring 00:13:35.942 15:37:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:35.942 15:37:15 -- nvmf/common.sh@123 -- # set -e 00:13:35.942 15:37:15 -- nvmf/common.sh@124 -- # return 0 00:13:35.942 15:37:15 -- nvmf/common.sh@477 -- # '[' -n 2088850 ']' 00:13:35.942 15:37:15 -- nvmf/common.sh@478 -- # killprocess 2088850 00:13:35.942 15:37:15 -- common/autotest_common.sh@926 -- # '[' -z 2088850 ']' 00:13:35.942 15:37:15 -- common/autotest_common.sh@930 -- # kill -0 2088850 00:13:35.942 15:37:15 -- common/autotest_common.sh@931 -- # uname 00:13:35.942 15:37:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:35.942 15:37:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2088850 00:13:35.942 15:37:15 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:35.942 15:37:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:35.942 15:37:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2088850' 00:13:35.942 killing process with pid 2088850 00:13:35.942 15:37:15 -- common/autotest_common.sh@945 -- # kill 2088850 00:13:35.943 15:37:15 -- common/autotest_common.sh@950 -- # wait 2088850 00:13:36.201 [2024-07-10 15:37:15.548636] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:13:36.201 15:37:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:36.459 15:37:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:36.459 15:37:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:36.459 15:37:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:36.459 15:37:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:36.459 15:37:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:36.459 15:37:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:36.459 15:37:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.360 15:37:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:38.360 00:13:38.360 real 0m7.271s 00:13:38.360 user 0m22.182s 00:13:38.360 sys 0m1.460s 00:13:38.360 15:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.360 15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:13:38.360 ************************************ 00:13:38.360 END TEST nvmf_host_management 00:13:38.361 ************************************ 00:13:38.361 15:37:17 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:13:38.361 00:13:38.361 real 0m9.453s 00:13:38.361 user 0m22.914s 00:13:38.361 sys 0m2.937s 00:13:38.361 15:37:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:38.361 15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:13:38.361 ************************************ 00:13:38.361 END TEST nvmf_host_management 00:13:38.361 ************************************ 00:13:38.361 15:37:17 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:38.361 15:37:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:38.361 15:37:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:38.361 15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:13:38.361 ************************************ 00:13:38.361 START TEST nvmf_lvol 00:13:38.361 ************************************ 00:13:38.361 15:37:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:13:38.361 * Looking for test storage... 
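Before the next test starts, note that the nvmftestfini teardown traced above reduces to roughly the sequence below. This is only an approximation of what the trace shows (the real helpers live in test/nvmf/common.sh and test/common/autotest_common.sh); the pid is the one from this run, and the namespace deletion step is an assumption about what _remove_spdk_ns does, since its body is not traced.

# Rough shape of the teardown, with this run's values hard-coded.
sync
modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics / nvme_keyring, as seen in the rmmod output
modprobe -v -r nvme-fabrics
nvmfpid=2088850                  # pid recorded when nvmf_tgt was started
if kill -0 "$nvmfpid" 2>/dev/null; then
    echo "killing process with pid $nvmfpid"
    kill "$nvmfpid"
    wait "$nvmfpid"
fi
ip netns delete cvl_0_0_ns_spdk  # assumption: _remove_spdk_ns deletes the test namespace
ip -4 addr flush cvl_0_1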
00:13:38.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:38.361 15:37:17 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:38.361 15:37:17 -- nvmf/common.sh@7 -- # uname -s 00:13:38.361 15:37:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:38.361 15:37:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:38.361 15:37:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:38.361 15:37:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:38.361 15:37:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:38.361 15:37:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:38.361 15:37:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:38.361 15:37:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:38.361 15:37:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:38.361 15:37:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:38.361 15:37:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.361 15:37:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:38.361 15:37:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:38.361 15:37:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:38.361 15:37:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:38.361 15:37:17 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:38.361 15:37:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:38.361 15:37:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:38.361 15:37:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:38.361 15:37:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.361 15:37:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.361 15:37:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.361 15:37:17 -- paths/export.sh@5 -- # export PATH 00:13:38.361 15:37:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:38.361 15:37:17 -- nvmf/common.sh@46 -- # : 0 00:13:38.361 15:37:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:38.361 15:37:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:38.361 15:37:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:38.361 15:37:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:38.361 15:37:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:38.361 15:37:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:38.361 15:37:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:38.361 15:37:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:38.361 15:37:17 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:38.361 15:37:17 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:38.361 15:37:17 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:13:38.361 15:37:17 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:13:38.361 15:37:17 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:38.361 15:37:17 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:13:38.361 15:37:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:38.361 15:37:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:38.361 15:37:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:38.361 15:37:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:38.361 15:37:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:38.361 15:37:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:38.361 15:37:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:38.361 15:37:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:38.619 15:37:17 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:38.619 15:37:17 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:38.619 15:37:17 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:38.619 15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:13:40.519 15:37:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:40.519 15:37:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:40.519 15:37:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:40.519 15:37:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:40.519 15:37:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:40.519 15:37:19 
-- nvmf/common.sh@292 -- # pci_drivers=() 00:13:40.519 15:37:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:40.519 15:37:19 -- nvmf/common.sh@294 -- # net_devs=() 00:13:40.519 15:37:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:40.519 15:37:19 -- nvmf/common.sh@295 -- # e810=() 00:13:40.519 15:37:19 -- nvmf/common.sh@295 -- # local -ga e810 00:13:40.519 15:37:19 -- nvmf/common.sh@296 -- # x722=() 00:13:40.519 15:37:19 -- nvmf/common.sh@296 -- # local -ga x722 00:13:40.519 15:37:19 -- nvmf/common.sh@297 -- # mlx=() 00:13:40.519 15:37:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:40.519 15:37:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:40.519 15:37:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:40.519 15:37:19 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:40.519 15:37:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:40.519 15:37:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:40.519 15:37:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:40.519 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:40.519 15:37:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:40.519 15:37:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:40.519 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:40.519 15:37:19 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:40.519 15:37:19 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:40.519 15:37:19 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.519 15:37:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:40.519 15:37:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.519 15:37:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:40.519 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:40.519 15:37:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.519 15:37:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:40.519 15:37:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:40.519 15:37:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:40.519 15:37:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:40.519 15:37:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:40.519 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:40.519 15:37:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:40.519 15:37:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:40.519 15:37:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:40.519 15:37:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:40.519 15:37:19 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:40.519 15:37:19 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:40.519 15:37:19 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:40.519 15:37:19 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:40.519 15:37:19 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:40.519 15:37:19 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:40.519 15:37:19 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:40.519 15:37:19 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:40.519 15:37:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:40.519 15:37:19 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:40.519 15:37:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:40.519 15:37:19 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:40.519 15:37:19 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:40.519 15:37:19 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:40.519 15:37:19 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:40.519 15:37:19 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:40.519 15:37:19 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:40.519 15:37:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:40.519 15:37:19 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:40.519 15:37:19 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:40.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:40.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:13:40.519 00:13:40.519 --- 10.0.0.2 ping statistics --- 00:13:40.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.519 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:13:40.519 15:37:19 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:40.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:40.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:13:40.519 00:13:40.519 --- 10.0.0.1 ping statistics --- 00:13:40.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:40.519 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:40.519 15:37:19 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:40.519 15:37:19 -- nvmf/common.sh@410 -- # return 0 00:13:40.519 15:37:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:40.519 15:37:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:40.519 15:37:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:40.519 15:37:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:40.519 15:37:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:40.519 15:37:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:40.776 15:37:19 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:13:40.776 15:37:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:40.776 15:37:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:40.776 15:37:19 -- common/autotest_common.sh@10 -- # set +x 00:13:40.776 15:37:19 -- nvmf/common.sh@469 -- # nvmfpid=2091534 00:13:40.776 15:37:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:40.776 15:37:19 -- nvmf/common.sh@470 -- # waitforlisten 2091534 00:13:40.776 15:37:19 -- common/autotest_common.sh@819 -- # '[' -z 2091534 ']' 00:13:40.776 15:37:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:40.776 15:37:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:40.776 15:37:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:40.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:40.776 15:37:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:40.776 15:37:19 -- common/autotest_common.sh@10 -- # set +x 00:13:40.776 [2024-07-10 15:37:19.947395] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:40.776 [2024-07-10 15:37:19.947479] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:40.776 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.776 [2024-07-10 15:37:20.013682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:40.776 [2024-07-10 15:37:20.126351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:40.776 [2024-07-10 15:37:20.126518] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:40.776 [2024-07-10 15:37:20.126536] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:40.776 [2024-07-10 15:37:20.126548] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
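Condensed from the nvmf_tcp_init trace above: the target port (cvl_0_0) is moved into a dedicated network namespace and addressed as 10.0.0.2, the initiator port (cvl_0_1) stays in the root namespace as 10.0.0.1, and the nvmf target is then started inside that namespace. All commands below appear in the trace; only the comments are added.

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side, inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP into the root namespace
ping -c 1 10.0.0.2                                                  # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns
# The target then runs inside the namespace:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7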
00:13:40.776 [2024-07-10 15:37:20.126607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:40.776 [2024-07-10 15:37:20.130459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:40.776 [2024-07-10 15:37:20.130463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:41.707 15:37:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:41.707 15:37:20 -- common/autotest_common.sh@852 -- # return 0 00:13:41.707 15:37:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:41.707 15:37:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:41.707 15:37:20 -- common/autotest_common.sh@10 -- # set +x 00:13:41.707 15:37:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:41.707 15:37:20 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:41.965 [2024-07-10 15:37:21.202104] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:41.965 15:37:21 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.222 15:37:21 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:13:42.222 15:37:21 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:13:42.480 15:37:21 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:13:42.480 15:37:21 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:13:42.738 15:37:21 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:13:42.995 15:37:22 -- target/nvmf_lvol.sh@29 -- # lvs=416187da-e346-4253-b4bf-1896dc83a585 00:13:42.995 15:37:22 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 416187da-e346-4253-b4bf-1896dc83a585 lvol 20 00:13:43.251 15:37:22 -- target/nvmf_lvol.sh@32 -- # lvol=a5652654-19e9-49c4-b93f-cad2e162bcde 00:13:43.251 15:37:22 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:43.510 15:37:22 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a5652654-19e9-49c4-b93f-cad2e162bcde 00:13:43.767 15:37:22 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:44.023 [2024-07-10 15:37:23.169229] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:44.023 15:37:23 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:44.280 15:37:23 -- target/nvmf_lvol.sh@42 -- # perf_pid=2091974 00:13:44.280 15:37:23 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:13:44.280 15:37:23 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:13:44.280 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.211 
15:37:24 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot a5652654-19e9-49c4-b93f-cad2e162bcde MY_SNAPSHOT 00:13:45.507 15:37:24 -- target/nvmf_lvol.sh@47 -- # snapshot=9cffaa8f-7ad0-4568-8524-ca9786d4eaee 00:13:45.507 15:37:24 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize a5652654-19e9-49c4-b93f-cad2e162bcde 30 00:13:45.764 15:37:24 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9cffaa8f-7ad0-4568-8524-ca9786d4eaee MY_CLONE 00:13:46.020 15:37:25 -- target/nvmf_lvol.sh@49 -- # clone=7856f4a0-d9ee-4e48-b063-09bd8e133b2c 00:13:46.020 15:37:25 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7856f4a0-d9ee-4e48-b063-09bd8e133b2c 00:13:46.276 15:37:25 -- target/nvmf_lvol.sh@53 -- # wait 2091974 00:13:56.295 Initializing NVMe Controllers 00:13:56.295 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:56.295 Controller IO queue size 128, less than required. 00:13:56.295 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:56.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:13:56.295 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:13:56.295 Initialization complete. Launching workers. 00:13:56.295 ======================================================== 00:13:56.295 Latency(us) 00:13:56.295 Device Information : IOPS MiB/s Average min max 00:13:56.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11074.39 43.26 11565.36 2304.05 73828.05 00:13:56.295 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10933.30 42.71 11712.96 2131.53 66452.93 00:13:56.295 ======================================================== 00:13:56.295 Total : 22007.68 85.97 11638.69 2131.53 73828.05 00:13:56.295 00:13:56.295 15:37:33 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:56.295 15:37:34 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a5652654-19e9-49c4-b93f-cad2e162bcde 00:13:56.295 15:37:34 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 416187da-e346-4253-b4bf-1896dc83a585 00:13:56.295 15:37:34 -- target/nvmf_lvol.sh@60 -- # rm -f 00:13:56.295 15:37:34 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:13:56.295 15:37:34 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:13:56.295 15:37:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:56.295 15:37:34 -- nvmf/common.sh@116 -- # sync 00:13:56.295 15:37:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:56.295 15:37:34 -- nvmf/common.sh@119 -- # set +e 00:13:56.295 15:37:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:56.295 15:37:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:56.295 rmmod nvme_tcp 00:13:56.295 rmmod nvme_fabrics 00:13:56.295 rmmod nvme_keyring 00:13:56.295 15:37:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:56.295 15:37:34 -- nvmf/common.sh@123 -- # set -e 00:13:56.295 15:37:34 -- nvmf/common.sh@124 -- # return 0 00:13:56.295 15:37:34 -- nvmf/common.sh@477 -- # '[' -n 2091534 ']' 
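Stripped of the xtrace noise, the nvmf_lvol run above builds a logical volume on top of a RAID-0 of two malloc bdevs, exports it over NVMe/TCP, runs spdk_nvme_perf against it while a snapshot, resize, clone and inflate happen underneath, and then tears everything down. A condensed sketch of that RPC sequence; rpc.py and spdk_nvme_perf stand for scripts/rpc.py and build/bin/spdk_nvme_perf in the SPDK tree, and the $lvs/$lvol/$snap/$clone variables mirror the UUIDs the script captures (416187da-..., a5652654-..., 9cffaa8f-..., 7856f4a0-... in this run):

  rpc.py nvmf_create_transport -t tcp -o -u 8192

  # Backing store: two 64 MiB malloc bdevs striped into raid0, with an lvstore on top
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_malloc_create 64 512
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
  lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)     # 20 MiB logical volume

  # Export the lvol over NVMe/TCP
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Random-write load from the initiator side while lvol operations run underneath
  spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &

  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
  rpc.py bdev_lvol_resize "$lvol" 30
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
  rpc.py bdev_lvol_inflate "$clone"
  wait      # let the 10 s perf run finish

  # Teardown
  rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
  rpc.py bdev_lvol_delete "$lvol"
  rpc.py bdev_lvol_delete_lvstore -u "$lvs"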
00:13:56.295 15:37:34 -- nvmf/common.sh@478 -- # killprocess 2091534 00:13:56.295 15:37:34 -- common/autotest_common.sh@926 -- # '[' -z 2091534 ']' 00:13:56.295 15:37:34 -- common/autotest_common.sh@930 -- # kill -0 2091534 00:13:56.295 15:37:34 -- common/autotest_common.sh@931 -- # uname 00:13:56.295 15:37:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:56.295 15:37:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2091534 00:13:56.295 15:37:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:56.295 15:37:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:56.295 15:37:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2091534' 00:13:56.295 killing process with pid 2091534 00:13:56.295 15:37:34 -- common/autotest_common.sh@945 -- # kill 2091534 00:13:56.295 15:37:34 -- common/autotest_common.sh@950 -- # wait 2091534 00:13:56.295 15:37:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:56.295 15:37:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:56.295 15:37:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:56.295 15:37:35 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.295 15:37:35 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:56.295 15:37:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.295 15:37:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.295 15:37:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.198 15:37:37 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:58.198 00:13:58.198 real 0m19.410s 00:13:58.198 user 1m5.537s 00:13:58.198 sys 0m5.877s 00:13:58.198 15:37:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.198 15:37:37 -- common/autotest_common.sh@10 -- # set +x 00:13:58.198 ************************************ 00:13:58.198 END TEST nvmf_lvol 00:13:58.198 ************************************ 00:13:58.198 15:37:37 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:58.198 15:37:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:58.198 15:37:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:58.198 15:37:37 -- common/autotest_common.sh@10 -- # set +x 00:13:58.198 ************************************ 00:13:58.198 START TEST nvmf_lvs_grow 00:13:58.198 ************************************ 00:13:58.198 15:37:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:13:58.198 * Looking for test storage... 
00:13:58.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.198 15:37:37 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.198 15:37:37 -- nvmf/common.sh@7 -- # uname -s 00:13:58.198 15:37:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.198 15:37:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.198 15:37:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.198 15:37:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.198 15:37:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.198 15:37:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.198 15:37:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.198 15:37:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.198 15:37:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.198 15:37:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.198 15:37:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.198 15:37:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.198 15:37:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.198 15:37:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.198 15:37:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.198 15:37:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.198 15:37:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.198 15:37:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.198 15:37:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.198 15:37:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.198 15:37:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.198 15:37:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.198 15:37:37 -- paths/export.sh@5 -- # export PATH 00:13:58.198 15:37:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.198 15:37:37 -- nvmf/common.sh@46 -- # : 0 00:13:58.198 15:37:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:58.198 15:37:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:58.198 15:37:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:58.198 15:37:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.198 15:37:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.198 15:37:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:58.198 15:37:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:58.199 15:37:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:58.199 15:37:37 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:58.199 15:37:37 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:13:58.199 15:37:37 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:13:58.199 15:37:37 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:58.199 15:37:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.199 15:37:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:58.199 15:37:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:58.199 15:37:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:58.199 15:37:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.199 15:37:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:58.199 15:37:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.199 15:37:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:58.199 15:37:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:58.199 15:37:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:58.199 15:37:37 -- common/autotest_common.sh@10 -- # set +x 00:14:00.126 15:37:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:00.126 15:37:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:00.126 15:37:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:00.126 15:37:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:00.126 15:37:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:00.126 15:37:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:00.126 15:37:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:00.126 15:37:39 -- nvmf/common.sh@294 -- # net_devs=() 00:14:00.126 15:37:39 
-- nvmf/common.sh@294 -- # local -ga net_devs 00:14:00.126 15:37:39 -- nvmf/common.sh@295 -- # e810=() 00:14:00.126 15:37:39 -- nvmf/common.sh@295 -- # local -ga e810 00:14:00.126 15:37:39 -- nvmf/common.sh@296 -- # x722=() 00:14:00.126 15:37:39 -- nvmf/common.sh@296 -- # local -ga x722 00:14:00.126 15:37:39 -- nvmf/common.sh@297 -- # mlx=() 00:14:00.126 15:37:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:00.126 15:37:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.126 15:37:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:00.126 15:37:39 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:00.126 15:37:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:00.126 15:37:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:00.126 15:37:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:00.126 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:00.126 15:37:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:00.126 15:37:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:00.126 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:00.126 15:37:39 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:00.126 15:37:39 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:00.126 15:37:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:00.126 15:37:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.126 15:37:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:00.126 15:37:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.126 15:37:39 -- 
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:00.126 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:00.126 15:37:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.126 15:37:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:00.126 15:37:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.126 15:37:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:00.126 15:37:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.126 15:37:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:00.126 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:00.126 15:37:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.126 15:37:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:00.126 15:37:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:00.126 15:37:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:00.127 15:37:39 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:00.127 15:37:39 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:00.127 15:37:39 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.127 15:37:39 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.127 15:37:39 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.127 15:37:39 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:00.127 15:37:39 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.127 15:37:39 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.127 15:37:39 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:00.127 15:37:39 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.127 15:37:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.127 15:37:39 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:00.127 15:37:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:00.127 15:37:39 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.127 15:37:39 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.127 15:37:39 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.127 15:37:39 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.127 15:37:39 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:00.127 15:37:39 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.127 15:37:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.127 15:37:39 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.127 15:37:39 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:00.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:14:00.127 00:14:00.127 --- 10.0.0.2 ping statistics --- 00:14:00.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.127 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:14:00.127 15:37:39 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:14:00.127 00:14:00.127 --- 10.0.0.1 ping statistics --- 00:14:00.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.127 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:14:00.127 15:37:39 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.127 15:37:39 -- nvmf/common.sh@410 -- # return 0 00:14:00.127 15:37:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:00.127 15:37:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.127 15:37:39 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:00.127 15:37:39 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:00.127 15:37:39 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.127 15:37:39 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:00.127 15:37:39 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:00.127 15:37:39 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:00.127 15:37:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:00.127 15:37:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:00.127 15:37:39 -- common/autotest_common.sh@10 -- # set +x 00:14:00.127 15:37:39 -- nvmf/common.sh@469 -- # nvmfpid=2095296 00:14:00.127 15:37:39 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:00.127 15:37:39 -- nvmf/common.sh@470 -- # waitforlisten 2095296 00:14:00.127 15:37:39 -- common/autotest_common.sh@819 -- # '[' -z 2095296 ']' 00:14:00.127 15:37:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.127 15:37:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:00.127 15:37:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.127 15:37:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:00.127 15:37:39 -- common/autotest_common.sh@10 -- # set +x 00:14:00.127 [2024-07-10 15:37:39.382994] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:00.127 [2024-07-10 15:37:39.383082] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.127 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.127 [2024-07-10 15:37:39.448789] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:00.384 [2024-07-10 15:37:39.560739] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:00.384 [2024-07-10 15:37:39.560877] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.384 [2024-07-10 15:37:39.560893] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.384 [2024-07-10 15:37:39.560905] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
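nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the application answers on its RPC socket. Roughly, and leaving out the retry and bookkeeping details of the real helper in autotest_common.sh, that amounts to something like the following sketch; $SPDK_ROOT is a placeholder for the checkout used in this job and the polling loop is only a crude stand-in for waitforlisten:

  # Start the target with the same arguments as this run:
  # -i 0 (shared-memory id), -e 0xFFFF (tracepoint group mask), -m 0x1 (core mask)
  ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!

  # Poll /var/tmp/spdk.sock until the app responds to a basic RPC
  until "$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.5
  done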
00:14:00.384 [2024-07-10 15:37:39.560943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.949 15:37:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:00.949 15:37:40 -- common/autotest_common.sh@852 -- # return 0 00:14:00.949 15:37:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:00.949 15:37:40 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:00.949 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:14:01.207 15:37:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.207 15:37:40 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:01.464 [2024-07-10 15:37:40.609772] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:01.464 15:37:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:01.464 15:37:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:01.464 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:14:01.464 ************************************ 00:14:01.464 START TEST lvs_grow_clean 00:14:01.464 ************************************ 00:14:01.464 15:37:40 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:01.464 15:37:40 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:01.722 15:37:40 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:01.722 15:37:40 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:01.980 15:37:41 -- target/nvmf_lvs_grow.sh@28 -- # lvs=764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:01.980 15:37:41 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:01.980 15:37:41 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:02.237 15:37:41 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:02.238 15:37:41 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:02.238 15:37:41 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 lvol 150 00:14:02.495 15:37:41 -- target/nvmf_lvs_grow.sh@33 -- # lvol=03f4bb39-d4cc-4724-bd06-bfacea97b509 00:14:02.495 15:37:41 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:02.495 15:37:41 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:02.752 [2024-07-10 15:37:41.913659] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:02.752 [2024-07-10 15:37:41.913746] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:02.752 true 00:14:02.752 15:37:41 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:02.752 15:37:41 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:03.010 15:37:42 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:03.010 15:37:42 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:03.268 15:37:42 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 03f4bb39-d4cc-4724-bd06-bfacea97b509 00:14:03.268 15:37:42 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:03.526 [2024-07-10 15:37:42.860606] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:03.526 15:37:42 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:03.783 15:37:43 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2095786 00:14:03.783 15:37:43 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:03.783 15:37:43 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:03.783 15:37:43 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2095786 /var/tmp/bdevperf.sock 00:14:03.783 15:37:43 -- common/autotest_common.sh@819 -- # '[' -z 2095786 ']' 00:14:03.783 15:37:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:03.783 15:37:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:03.783 15:37:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:03.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:03.783 15:37:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:03.783 15:37:43 -- common/autotest_common.sh@10 -- # set +x 00:14:03.783 [2024-07-10 15:37:43.146555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
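The lvs_grow_clean setup above is the core of what this test exercises: an lvstore on a file-backed AIO bdev reports 49 data clusters (4 MiB each) for a 200 MiB file, and after the backing file is grown and rescanned the store itself still has to be grown explicitly. Condensed, with rpc.py and the aio_bdev path abbreviated ($TESTDIR is a placeholder for test/nvmf/target in this workspace), the flow that the bdevperf run below is layered on looks like this:

  truncate -s 200M "$TESTDIR/aio_bdev"                       # 200 MiB backing file
  rpc.py bdev_aio_create "$TESTDIR/aio_bdev" aio_bdev 4096   # 4 KiB logical block size

  lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 49
  lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)   # 150 MiB volume (lvol_bdev_size_mb in the script)

  # Grow the backing file; the AIO bdev picks up the new size on rescan,
  # but the lvstore keeps its 49 clusters until it is grown too
  truncate -s 400M "$TESTDIR/aio_bdev"
  rpc.py bdev_aio_rescan aio_bdev

  # Done later in the test, while bdevperf is writing to the exported lvol:
  rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # -> 99

The per-second IOPS tables that follow come from that bdevperf run; the grow happens mid-run, which is why the cluster count check flips from 49 to 99 between the second and third seconds of output.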
00:14:03.783 [2024-07-10 15:37:43.146626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2095786 ] 00:14:04.040 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.040 [2024-07-10 15:37:43.209087] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.040 [2024-07-10 15:37:43.323305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.972 15:37:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:04.972 15:37:44 -- common/autotest_common.sh@852 -- # return 0 00:14:04.972 15:37:44 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:05.230 Nvme0n1 00:14:05.230 15:37:44 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:05.487 [ 00:14:05.487 { 00:14:05.487 "name": "Nvme0n1", 00:14:05.487 "aliases": [ 00:14:05.487 "03f4bb39-d4cc-4724-bd06-bfacea97b509" 00:14:05.487 ], 00:14:05.487 "product_name": "NVMe disk", 00:14:05.487 "block_size": 4096, 00:14:05.487 "num_blocks": 38912, 00:14:05.487 "uuid": "03f4bb39-d4cc-4724-bd06-bfacea97b509", 00:14:05.487 "assigned_rate_limits": { 00:14:05.487 "rw_ios_per_sec": 0, 00:14:05.487 "rw_mbytes_per_sec": 0, 00:14:05.487 "r_mbytes_per_sec": 0, 00:14:05.487 "w_mbytes_per_sec": 0 00:14:05.487 }, 00:14:05.487 "claimed": false, 00:14:05.487 "zoned": false, 00:14:05.487 "supported_io_types": { 00:14:05.487 "read": true, 00:14:05.487 "write": true, 00:14:05.487 "unmap": true, 00:14:05.487 "write_zeroes": true, 00:14:05.487 "flush": true, 00:14:05.487 "reset": true, 00:14:05.487 "compare": true, 00:14:05.487 "compare_and_write": true, 00:14:05.487 "abort": true, 00:14:05.487 "nvme_admin": true, 00:14:05.487 "nvme_io": true 00:14:05.487 }, 00:14:05.487 "driver_specific": { 00:14:05.487 "nvme": [ 00:14:05.487 { 00:14:05.487 "trid": { 00:14:05.487 "trtype": "TCP", 00:14:05.487 "adrfam": "IPv4", 00:14:05.487 "traddr": "10.0.0.2", 00:14:05.487 "trsvcid": "4420", 00:14:05.487 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:05.487 }, 00:14:05.487 "ctrlr_data": { 00:14:05.487 "cntlid": 1, 00:14:05.487 "vendor_id": "0x8086", 00:14:05.487 "model_number": "SPDK bdev Controller", 00:14:05.487 "serial_number": "SPDK0", 00:14:05.487 "firmware_revision": "24.01.1", 00:14:05.487 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:05.487 "oacs": { 00:14:05.487 "security": 0, 00:14:05.487 "format": 0, 00:14:05.487 "firmware": 0, 00:14:05.487 "ns_manage": 0 00:14:05.487 }, 00:14:05.487 "multi_ctrlr": true, 00:14:05.487 "ana_reporting": false 00:14:05.487 }, 00:14:05.487 "vs": { 00:14:05.487 "nvme_version": "1.3" 00:14:05.487 }, 00:14:05.487 "ns_data": { 00:14:05.487 "id": 1, 00:14:05.487 "can_share": true 00:14:05.487 } 00:14:05.487 } 00:14:05.487 ], 00:14:05.487 "mp_policy": "active_passive" 00:14:05.487 } 00:14:05.487 } 00:14:05.487 ] 00:14:05.487 15:37:44 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2096026 00:14:05.487 15:37:44 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:05.487 15:37:44 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:05.744 Running I/O 
for 10 seconds... 00:14:06.678 Latency(us) 00:14:06.678 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.678 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:06.678 Nvme0n1 : 1.00 14720.00 57.50 0.00 0.00 0.00 0.00 0.00 00:14:06.678 =================================================================================================================== 00:14:06.678 Total : 14720.00 57.50 0.00 0.00 0.00 0.00 0.00 00:14:06.678 00:14:07.611 15:37:46 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:07.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:07.611 Nvme0n1 : 2.00 14784.00 57.75 0.00 0.00 0.00 0.00 0.00 00:14:07.611 =================================================================================================================== 00:14:07.611 Total : 14784.00 57.75 0.00 0.00 0.00 0.00 0.00 00:14:07.611 00:14:07.868 true 00:14:07.868 15:37:47 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:07.868 15:37:47 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:08.126 15:37:47 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:08.126 15:37:47 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:08.126 15:37:47 -- target/nvmf_lvs_grow.sh@65 -- # wait 2096026 00:14:08.689 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:08.689 Nvme0n1 : 3.00 14891.67 58.17 0.00 0.00 0.00 0.00 0.00 00:14:08.689 =================================================================================================================== 00:14:08.689 Total : 14891.67 58.17 0.00 0.00 0.00 0.00 0.00 00:14:08.689 00:14:09.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:09.621 Nvme0n1 : 4.00 14928.50 58.31 0.00 0.00 0.00 0.00 0.00 00:14:09.621 =================================================================================================================== 00:14:09.621 Total : 14928.50 58.31 0.00 0.00 0.00 0.00 0.00 00:14:09.621 00:14:10.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:10.993 Nvme0n1 : 5.00 14963.60 58.45 0.00 0.00 0.00 0.00 0.00 00:14:10.993 =================================================================================================================== 00:14:10.993 Total : 14963.60 58.45 0.00 0.00 0.00 0.00 0.00 00:14:10.993 00:14:11.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:11.925 Nvme0n1 : 6.00 14968.50 58.47 0.00 0.00 0.00 0.00 0.00 00:14:11.925 =================================================================================================================== 00:14:11.925 Total : 14968.50 58.47 0.00 0.00 0.00 0.00 0.00 00:14:11.925 00:14:12.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:12.856 Nvme0n1 : 7.00 15003.71 58.61 0.00 0.00 0.00 0.00 0.00 00:14:12.856 =================================================================================================================== 00:14:12.856 Total : 15003.71 58.61 0.00 0.00 0.00 0.00 0.00 00:14:12.856 00:14:13.789 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:13.789 Nvme0n1 : 8.00 15032.25 58.72 0.00 0.00 0.00 0.00 0.00 00:14:13.789 
=================================================================================================================== 00:14:13.789 Total : 15032.25 58.72 0.00 0.00 0.00 0.00 0.00 00:14:13.789 00:14:14.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:14.721 Nvme0n1 : 9.00 15054.44 58.81 0.00 0.00 0.00 0.00 0.00 00:14:14.721 =================================================================================================================== 00:14:14.721 Total : 15054.44 58.81 0.00 0.00 0.00 0.00 0.00 00:14:14.721 00:14:15.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.653 Nvme0n1 : 10.00 15072.20 58.88 0.00 0.00 0.00 0.00 0.00 00:14:15.653 =================================================================================================================== 00:14:15.653 Total : 15072.20 58.88 0.00 0.00 0.00 0.00 0.00 00:14:15.653 00:14:15.653 00:14:15.653 Latency(us) 00:14:15.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.653 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:15.653 Nvme0n1 : 10.01 15075.12 58.89 0.00 0.00 8485.95 2220.94 13010.11 00:14:15.653 =================================================================================================================== 00:14:15.653 Total : 15075.12 58.89 0.00 0.00 8485.95 2220.94 13010.11 00:14:15.653 0 00:14:15.653 15:37:55 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2095786 00:14:15.653 15:37:55 -- common/autotest_common.sh@926 -- # '[' -z 2095786 ']' 00:14:15.653 15:37:55 -- common/autotest_common.sh@930 -- # kill -0 2095786 00:14:15.653 15:37:55 -- common/autotest_common.sh@931 -- # uname 00:14:15.653 15:37:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:15.653 15:37:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2095786 00:14:15.909 15:37:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:15.910 15:37:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:15.910 15:37:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2095786' 00:14:15.910 killing process with pid 2095786 00:14:15.910 15:37:55 -- common/autotest_common.sh@945 -- # kill 2095786 00:14:15.910 Received shutdown signal, test time was about 10.000000 seconds 00:14:15.910 00:14:15.910 Latency(us) 00:14:15.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:15.910 =================================================================================================================== 00:14:15.910 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:15.910 15:37:55 -- common/autotest_common.sh@950 -- # wait 2095786 00:14:16.166 15:37:55 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:16.423 15:37:55 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:16.423 15:37:55 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:16.680 15:37:55 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:16.680 15:37:55 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:16.680 15:37:55 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:16.937 [2024-07-10 15:37:56.064058] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:16.937 15:37:56 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:16.937 15:37:56 -- common/autotest_common.sh@640 -- # local es=0 00:14:16.937 15:37:56 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:16.938 15:37:56 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.938 15:37:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:16.938 15:37:56 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.938 15:37:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:16.938 15:37:56 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.938 15:37:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:16.938 15:37:56 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:16.938 15:37:56 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:16.938 15:37:56 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:17.196 request: 00:14:17.196 { 00:14:17.196 "uuid": "764de1f2-8d1f-4d69-9222-1e9f0f91f5a6", 00:14:17.196 "method": "bdev_lvol_get_lvstores", 00:14:17.196 "req_id": 1 00:14:17.196 } 00:14:17.196 Got JSON-RPC error response 00:14:17.196 response: 00:14:17.196 { 00:14:17.196 "code": -19, 00:14:17.196 "message": "No such device" 00:14:17.196 } 00:14:17.196 15:37:56 -- common/autotest_common.sh@643 -- # es=1 00:14:17.196 15:37:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:17.196 15:37:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:17.196 15:37:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:17.196 15:37:56 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:17.196 aio_bdev 00:14:17.196 15:37:56 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 03f4bb39-d4cc-4724-bd06-bfacea97b509 00:14:17.196 15:37:56 -- common/autotest_common.sh@887 -- # local bdev_name=03f4bb39-d4cc-4724-bd06-bfacea97b509 00:14:17.196 15:37:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:17.196 15:37:56 -- common/autotest_common.sh@889 -- # local i 00:14:17.196 15:37:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:17.196 15:37:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:17.196 15:37:56 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:17.454 15:37:56 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 03f4bb39-d4cc-4724-bd06-bfacea97b509 -t 2000 00:14:17.711 [ 00:14:17.711 { 00:14:17.711 "name": "03f4bb39-d4cc-4724-bd06-bfacea97b509", 00:14:17.711 "aliases": [ 00:14:17.711 "lvs/lvol" 
00:14:17.711 ], 00:14:17.711 "product_name": "Logical Volume", 00:14:17.711 "block_size": 4096, 00:14:17.711 "num_blocks": 38912, 00:14:17.711 "uuid": "03f4bb39-d4cc-4724-bd06-bfacea97b509", 00:14:17.711 "assigned_rate_limits": { 00:14:17.711 "rw_ios_per_sec": 0, 00:14:17.711 "rw_mbytes_per_sec": 0, 00:14:17.711 "r_mbytes_per_sec": 0, 00:14:17.711 "w_mbytes_per_sec": 0 00:14:17.711 }, 00:14:17.711 "claimed": false, 00:14:17.711 "zoned": false, 00:14:17.711 "supported_io_types": { 00:14:17.711 "read": true, 00:14:17.711 "write": true, 00:14:17.711 "unmap": true, 00:14:17.711 "write_zeroes": true, 00:14:17.711 "flush": false, 00:14:17.711 "reset": true, 00:14:17.711 "compare": false, 00:14:17.711 "compare_and_write": false, 00:14:17.711 "abort": false, 00:14:17.711 "nvme_admin": false, 00:14:17.711 "nvme_io": false 00:14:17.711 }, 00:14:17.711 "driver_specific": { 00:14:17.711 "lvol": { 00:14:17.711 "lvol_store_uuid": "764de1f2-8d1f-4d69-9222-1e9f0f91f5a6", 00:14:17.711 "base_bdev": "aio_bdev", 00:14:17.711 "thin_provision": false, 00:14:17.711 "snapshot": false, 00:14:17.711 "clone": false, 00:14:17.711 "esnap_clone": false 00:14:17.711 } 00:14:17.711 } 00:14:17.711 } 00:14:17.711 ] 00:14:17.711 15:37:57 -- common/autotest_common.sh@895 -- # return 0 00:14:17.711 15:37:57 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:17.711 15:37:57 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:17.969 15:37:57 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:17.969 15:37:57 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:17.969 15:37:57 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:18.226 15:37:57 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:18.226 15:37:57 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 03f4bb39-d4cc-4724-bd06-bfacea97b509 00:14:18.484 15:37:57 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 764de1f2-8d1f-4d69-9222-1e9f0f91f5a6 00:14:18.742 15:37:58 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:19.000 00:14:19.000 real 0m17.622s 00:14:19.000 user 0m17.368s 00:14:19.000 sys 0m1.804s 00:14:19.000 15:37:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:19.000 15:37:58 -- common/autotest_common.sh@10 -- # set +x 00:14:19.000 ************************************ 00:14:19.000 END TEST lvs_grow_clean 00:14:19.000 ************************************ 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:19.000 15:37:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:19.000 15:37:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:19.000 15:37:58 -- common/autotest_common.sh@10 -- # set +x 00:14:19.000 ************************************ 00:14:19.000 START TEST lvs_grow_dirty 00:14:19.000 ************************************ 00:14:19.000 15:37:58 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:19.000 
15:37:58 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:19.000 15:37:58 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:19.258 15:37:58 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:19.258 15:37:58 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:19.516 15:37:58 -- target/nvmf_lvs_grow.sh@28 -- # lvs=840bebf2-c6df-4d28-9765-1336da212a93 00:14:19.516 15:37:58 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:19.516 15:37:58 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:19.773 15:37:59 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:19.774 15:37:59 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:19.774 15:37:59 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 840bebf2-c6df-4d28-9765-1336da212a93 lvol 150 00:14:20.032 15:37:59 -- target/nvmf_lvs_grow.sh@33 -- # lvol=c51277df-410e-4c50-8726-049f0105b840 00:14:20.032 15:37:59 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:20.032 15:37:59 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:20.290 [2024-07-10 15:37:59.506570] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:20.290 [2024-07-10 15:37:59.506654] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:20.290 true 00:14:20.290 15:37:59 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:20.290 15:37:59 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:20.548 15:37:59 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:20.548 15:37:59 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:20.806 15:38:00 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
c51277df-410e-4c50-8726-049f0105b840 00:14:21.064 15:38:00 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:21.322 15:38:00 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:21.581 15:38:00 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2097986 00:14:21.581 15:38:00 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:21.581 15:38:00 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:21.581 15:38:00 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2097986 /var/tmp/bdevperf.sock 00:14:21.581 15:38:00 -- common/autotest_common.sh@819 -- # '[' -z 2097986 ']' 00:14:21.581 15:38:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:21.581 15:38:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.581 15:38:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:21.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:21.581 15:38:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.581 15:38:00 -- common/autotest_common.sh@10 -- # set +x 00:14:21.581 [2024-07-10 15:38:00.765876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:21.581 [2024-07-10 15:38:00.765960] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2097986 ] 00:14:21.581 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.581 [2024-07-10 15:38:00.827580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.581 [2024-07-10 15:38:00.944027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:22.638 15:38:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.638 15:38:01 -- common/autotest_common.sh@852 -- # return 0 00:14:22.638 15:38:01 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:22.896 Nvme0n1 00:14:22.896 15:38:02 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:23.154 [ 00:14:23.154 { 00:14:23.154 "name": "Nvme0n1", 00:14:23.154 "aliases": [ 00:14:23.154 "c51277df-410e-4c50-8726-049f0105b840" 00:14:23.154 ], 00:14:23.154 "product_name": "NVMe disk", 00:14:23.154 "block_size": 4096, 00:14:23.154 "num_blocks": 38912, 00:14:23.154 "uuid": "c51277df-410e-4c50-8726-049f0105b840", 00:14:23.155 "assigned_rate_limits": { 00:14:23.155 "rw_ios_per_sec": 0, 00:14:23.155 "rw_mbytes_per_sec": 0, 00:14:23.155 "r_mbytes_per_sec": 0, 00:14:23.155 "w_mbytes_per_sec": 0 00:14:23.155 }, 00:14:23.155 "claimed": false, 00:14:23.155 "zoned": false, 00:14:23.155 "supported_io_types": { 00:14:23.155 "read": true, 00:14:23.155 "write": true, 
00:14:23.155 "unmap": true, 00:14:23.155 "write_zeroes": true, 00:14:23.155 "flush": true, 00:14:23.155 "reset": true, 00:14:23.155 "compare": true, 00:14:23.155 "compare_and_write": true, 00:14:23.155 "abort": true, 00:14:23.155 "nvme_admin": true, 00:14:23.155 "nvme_io": true 00:14:23.155 }, 00:14:23.155 "driver_specific": { 00:14:23.155 "nvme": [ 00:14:23.155 { 00:14:23.155 "trid": { 00:14:23.155 "trtype": "TCP", 00:14:23.155 "adrfam": "IPv4", 00:14:23.155 "traddr": "10.0.0.2", 00:14:23.155 "trsvcid": "4420", 00:14:23.155 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:14:23.155 }, 00:14:23.155 "ctrlr_data": { 00:14:23.155 "cntlid": 1, 00:14:23.155 "vendor_id": "0x8086", 00:14:23.155 "model_number": "SPDK bdev Controller", 00:14:23.155 "serial_number": "SPDK0", 00:14:23.155 "firmware_revision": "24.01.1", 00:14:23.155 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:23.155 "oacs": { 00:14:23.155 "security": 0, 00:14:23.155 "format": 0, 00:14:23.155 "firmware": 0, 00:14:23.155 "ns_manage": 0 00:14:23.155 }, 00:14:23.155 "multi_ctrlr": true, 00:14:23.155 "ana_reporting": false 00:14:23.155 }, 00:14:23.155 "vs": { 00:14:23.155 "nvme_version": "1.3" 00:14:23.155 }, 00:14:23.155 "ns_data": { 00:14:23.155 "id": 1, 00:14:23.155 "can_share": true 00:14:23.155 } 00:14:23.155 } 00:14:23.155 ], 00:14:23.155 "mp_policy": "active_passive" 00:14:23.155 } 00:14:23.155 } 00:14:23.155 ] 00:14:23.155 15:38:02 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2098258 00:14:23.155 15:38:02 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:23.155 15:38:02 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:23.413 Running I/O for 10 seconds... 00:14:24.346 Latency(us) 00:14:24.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:24.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:24.346 Nvme0n1 : 1.00 14336.00 56.00 0.00 0.00 0.00 0.00 0.00 00:14:24.346 =================================================================================================================== 00:14:24.346 Total : 14336.00 56.00 0.00 0.00 0.00 0.00 0.00 00:14:24.346 00:14:25.279 15:38:04 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:25.279 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:25.279 Nvme0n1 : 2.00 14536.50 56.78 0.00 0.00 0.00 0.00 0.00 00:14:25.279 =================================================================================================================== 00:14:25.279 Total : 14536.50 56.78 0.00 0.00 0.00 0.00 0.00 00:14:25.279 00:14:25.538 true 00:14:25.538 15:38:04 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:25.538 15:38:04 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:25.796 15:38:05 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:25.796 15:38:05 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:25.796 15:38:05 -- target/nvmf_lvs_grow.sh@65 -- # wait 2098258 00:14:26.362 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:26.362 Nvme0n1 : 3.00 14634.67 57.17 0.00 0.00 0.00 0.00 0.00 00:14:26.362 
=================================================================================================================== 00:14:26.362 Total : 14634.67 57.17 0.00 0.00 0.00 0.00 0.00 00:14:26.362 00:14:27.296 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.296 Nvme0n1 : 4.00 14720.00 57.50 0.00 0.00 0.00 0.00 0.00 00:14:27.296 =================================================================================================================== 00:14:27.296 Total : 14720.00 57.50 0.00 0.00 0.00 0.00 0.00 00:14:27.296 00:14:28.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.671 Nvme0n1 : 5.00 14810.20 57.85 0.00 0.00 0.00 0.00 0.00 00:14:28.671 =================================================================================================================== 00:14:28.671 Total : 14810.20 57.85 0.00 0.00 0.00 0.00 0.00 00:14:28.671 00:14:29.606 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.606 Nvme0n1 : 6.00 14869.17 58.08 0.00 0.00 0.00 0.00 0.00 00:14:29.606 =================================================================================================================== 00:14:29.606 Total : 14869.17 58.08 0.00 0.00 0.00 0.00 0.00 00:14:29.607 00:14:30.540 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.541 Nvme0n1 : 7.00 14930.71 58.32 0.00 0.00 0.00 0.00 0.00 00:14:30.541 =================================================================================================================== 00:14:30.541 Total : 14930.71 58.32 0.00 0.00 0.00 0.00 0.00 00:14:30.541 00:14:31.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.475 Nvme0n1 : 8.00 15016.38 58.66 0.00 0.00 0.00 0.00 0.00 00:14:31.476 =================================================================================================================== 00:14:31.476 Total : 15016.38 58.66 0.00 0.00 0.00 0.00 0.00 00:14:31.476 00:14:32.412 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.412 Nvme0n1 : 9.00 15047.44 58.78 0.00 0.00 0.00 0.00 0.00 00:14:32.412 =================================================================================================================== 00:14:32.412 Total : 15047.44 58.78 0.00 0.00 0.00 0.00 0.00 00:14:32.412 00:14:33.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.346 Nvme0n1 : 10.00 15104.30 59.00 0.00 0.00 0.00 0.00 0.00 00:14:33.346 =================================================================================================================== 00:14:33.346 Total : 15104.30 59.00 0.00 0.00 0.00 0.00 0.00 00:14:33.346 00:14:33.346 00:14:33.346 Latency(us) 00:14:33.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.346 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.346 Nvme0n1 : 10.01 15105.62 59.01 0.00 0.00 8467.79 5218.61 17864.63 00:14:33.346 =================================================================================================================== 00:14:33.346 Total : 15105.62 59.01 0.00 0.00 8467.79 5218.61 17864.63 00:14:33.346 0 00:14:33.346 15:38:12 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2097986 00:14:33.346 15:38:12 -- common/autotest_common.sh@926 -- # '[' -z 2097986 ']' 00:14:33.346 15:38:12 -- common/autotest_common.sh@930 -- # kill -0 2097986 00:14:33.346 15:38:12 -- common/autotest_common.sh@931 -- # uname 00:14:33.346 15:38:12 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:33.346 15:38:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2097986 00:14:33.346 15:38:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:33.346 15:38:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:33.346 15:38:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2097986' 00:14:33.346 killing process with pid 2097986 00:14:33.346 15:38:12 -- common/autotest_common.sh@945 -- # kill 2097986 00:14:33.346 Received shutdown signal, test time was about 10.000000 seconds 00:14:33.346 00:14:33.346 Latency(us) 00:14:33.346 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.346 =================================================================================================================== 00:14:33.346 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:33.346 15:38:12 -- common/autotest_common.sh@950 -- # wait 2097986 00:14:33.605 15:38:12 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:33.862 15:38:13 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:33.862 15:38:13 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:34.119 15:38:13 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:34.119 15:38:13 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:34.119 15:38:13 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2095296 00:14:34.119 15:38:13 -- target/nvmf_lvs_grow.sh@74 -- # wait 2095296 00:14:34.119 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2095296 Killed "${NVMF_APP[@]}" "$@" 00:14:34.119 15:38:13 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:34.119 15:38:13 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:34.119 15:38:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:34.119 15:38:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:34.119 15:38:13 -- common/autotest_common.sh@10 -- # set +x 00:14:34.119 15:38:13 -- nvmf/common.sh@469 -- # nvmfpid=2099509 00:14:34.119 15:38:13 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:34.119 15:38:13 -- nvmf/common.sh@470 -- # waitforlisten 2099509 00:14:34.119 15:38:13 -- common/autotest_common.sh@819 -- # '[' -z 2099509 ']' 00:14:34.119 15:38:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:34.119 15:38:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:34.119 15:38:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:34.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:34.119 15:38:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:34.119 15:38:13 -- common/autotest_common.sh@10 -- # set +x 00:14:34.376 [2024-07-10 15:38:13.521585] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:34.376 [2024-07-10 15:38:13.521675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:34.376 EAL: No free 2048 kB hugepages reported on node 1 00:14:34.376 [2024-07-10 15:38:13.586484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:34.376 [2024-07-10 15:38:13.694643] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:34.376 [2024-07-10 15:38:13.694788] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:34.376 [2024-07-10 15:38:13.694805] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:34.376 [2024-07-10 15:38:13.694818] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:34.376 [2024-07-10 15:38:13.694859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.310 15:38:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:35.310 15:38:14 -- common/autotest_common.sh@852 -- # return 0 00:14:35.310 15:38:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:35.310 15:38:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:35.310 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:14:35.310 15:38:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.310 15:38:14 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:35.568 [2024-07-10 15:38:14.702838] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:35.568 [2024-07-10 15:38:14.702984] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:35.568 [2024-07-10 15:38:14.703032] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:35.568 15:38:14 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:35.568 15:38:14 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev c51277df-410e-4c50-8726-049f0105b840 00:14:35.568 15:38:14 -- common/autotest_common.sh@887 -- # local bdev_name=c51277df-410e-4c50-8726-049f0105b840 00:14:35.568 15:38:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:35.568 15:38:14 -- common/autotest_common.sh@889 -- # local i 00:14:35.568 15:38:14 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:35.568 15:38:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:35.568 15:38:14 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:35.826 15:38:14 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c51277df-410e-4c50-8726-049f0105b840 -t 2000 00:14:35.826 [ 00:14:35.826 { 00:14:35.826 "name": "c51277df-410e-4c50-8726-049f0105b840", 00:14:35.826 "aliases": [ 00:14:35.826 "lvs/lvol" 00:14:35.826 ], 00:14:35.826 "product_name": "Logical Volume", 00:14:35.826 "block_size": 4096, 00:14:35.826 "num_blocks": 38912, 00:14:35.826 "uuid": "c51277df-410e-4c50-8726-049f0105b840", 00:14:35.826 "assigned_rate_limits": { 00:14:35.826 "rw_ios_per_sec": 0, 00:14:35.826 "rw_mbytes_per_sec": 0, 00:14:35.826 "r_mbytes_per_sec": 0, 00:14:35.826 
"w_mbytes_per_sec": 0 00:14:35.826 }, 00:14:35.826 "claimed": false, 00:14:35.826 "zoned": false, 00:14:35.826 "supported_io_types": { 00:14:35.826 "read": true, 00:14:35.826 "write": true, 00:14:35.826 "unmap": true, 00:14:35.826 "write_zeroes": true, 00:14:35.826 "flush": false, 00:14:35.826 "reset": true, 00:14:35.826 "compare": false, 00:14:35.826 "compare_and_write": false, 00:14:35.826 "abort": false, 00:14:35.826 "nvme_admin": false, 00:14:35.826 "nvme_io": false 00:14:35.826 }, 00:14:35.826 "driver_specific": { 00:14:35.826 "lvol": { 00:14:35.826 "lvol_store_uuid": "840bebf2-c6df-4d28-9765-1336da212a93", 00:14:35.826 "base_bdev": "aio_bdev", 00:14:35.826 "thin_provision": false, 00:14:35.826 "snapshot": false, 00:14:35.826 "clone": false, 00:14:35.826 "esnap_clone": false 00:14:35.826 } 00:14:35.826 } 00:14:35.826 } 00:14:35.826 ] 00:14:35.826 15:38:15 -- common/autotest_common.sh@895 -- # return 0 00:14:35.826 15:38:15 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:35.826 15:38:15 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:36.392 15:38:15 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:36.392 15:38:15 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:36.392 15:38:15 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:36.392 15:38:15 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:36.392 15:38:15 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:36.650 [2024-07-10 15:38:15.928039] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:36.650 15:38:15 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:36.650 15:38:15 -- common/autotest_common.sh@640 -- # local es=0 00:14:36.650 15:38:15 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:36.650 15:38:15 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.650 15:38:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.650 15:38:15 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.650 15:38:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.650 15:38:15 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.650 15:38:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:36.650 15:38:15 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:36.651 15:38:15 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:36.651 15:38:15 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:36.909 request: 00:14:36.909 { 00:14:36.909 
"uuid": "840bebf2-c6df-4d28-9765-1336da212a93", 00:14:36.909 "method": "bdev_lvol_get_lvstores", 00:14:36.909 "req_id": 1 00:14:36.909 } 00:14:36.909 Got JSON-RPC error response 00:14:36.909 response: 00:14:36.909 { 00:14:36.909 "code": -19, 00:14:36.909 "message": "No such device" 00:14:36.909 } 00:14:36.909 15:38:16 -- common/autotest_common.sh@643 -- # es=1 00:14:36.909 15:38:16 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:36.909 15:38:16 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:36.909 15:38:16 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:36.909 15:38:16 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:37.166 aio_bdev 00:14:37.166 15:38:16 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev c51277df-410e-4c50-8726-049f0105b840 00:14:37.166 15:38:16 -- common/autotest_common.sh@887 -- # local bdev_name=c51277df-410e-4c50-8726-049f0105b840 00:14:37.167 15:38:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:37.167 15:38:16 -- common/autotest_common.sh@889 -- # local i 00:14:37.167 15:38:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:37.167 15:38:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:37.167 15:38:16 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:37.424 15:38:16 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c51277df-410e-4c50-8726-049f0105b840 -t 2000 00:14:37.682 [ 00:14:37.682 { 00:14:37.682 "name": "c51277df-410e-4c50-8726-049f0105b840", 00:14:37.682 "aliases": [ 00:14:37.682 "lvs/lvol" 00:14:37.682 ], 00:14:37.682 "product_name": "Logical Volume", 00:14:37.682 "block_size": 4096, 00:14:37.682 "num_blocks": 38912, 00:14:37.682 "uuid": "c51277df-410e-4c50-8726-049f0105b840", 00:14:37.682 "assigned_rate_limits": { 00:14:37.682 "rw_ios_per_sec": 0, 00:14:37.682 "rw_mbytes_per_sec": 0, 00:14:37.682 "r_mbytes_per_sec": 0, 00:14:37.682 "w_mbytes_per_sec": 0 00:14:37.682 }, 00:14:37.682 "claimed": false, 00:14:37.682 "zoned": false, 00:14:37.682 "supported_io_types": { 00:14:37.682 "read": true, 00:14:37.682 "write": true, 00:14:37.682 "unmap": true, 00:14:37.682 "write_zeroes": true, 00:14:37.682 "flush": false, 00:14:37.682 "reset": true, 00:14:37.682 "compare": false, 00:14:37.682 "compare_and_write": false, 00:14:37.682 "abort": false, 00:14:37.683 "nvme_admin": false, 00:14:37.683 "nvme_io": false 00:14:37.683 }, 00:14:37.683 "driver_specific": { 00:14:37.683 "lvol": { 00:14:37.683 "lvol_store_uuid": "840bebf2-c6df-4d28-9765-1336da212a93", 00:14:37.683 "base_bdev": "aio_bdev", 00:14:37.683 "thin_provision": false, 00:14:37.683 "snapshot": false, 00:14:37.683 "clone": false, 00:14:37.683 "esnap_clone": false 00:14:37.683 } 00:14:37.683 } 00:14:37.683 } 00:14:37.683 ] 00:14:37.683 15:38:17 -- common/autotest_common.sh@895 -- # return 0 00:14:37.683 15:38:17 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:37.683 15:38:17 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:37.941 15:38:17 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:37.941 15:38:17 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:37.941 15:38:17 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:38.199 15:38:17 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:38.199 15:38:17 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c51277df-410e-4c50-8726-049f0105b840 00:14:38.457 15:38:17 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 840bebf2-c6df-4d28-9765-1336da212a93 00:14:38.715 15:38:18 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:38.973 15:38:18 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:14:38.973 00:14:38.973 real 0m19.993s 00:14:38.973 user 0m49.885s 00:14:38.973 sys 0m4.954s 00:14:38.973 15:38:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.973 15:38:18 -- common/autotest_common.sh@10 -- # set +x 00:14:38.973 ************************************ 00:14:38.973 END TEST lvs_grow_dirty 00:14:38.973 ************************************ 00:14:38.973 15:38:18 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:14:38.973 15:38:18 -- common/autotest_common.sh@796 -- # type=--id 00:14:38.973 15:38:18 -- common/autotest_common.sh@797 -- # id=0 00:14:38.973 15:38:18 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:38.973 15:38:18 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:38.973 15:38:18 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:38.973 15:38:18 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:38.973 15:38:18 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:38.973 15:38:18 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:38.973 nvmf_trace.0 00:14:38.973 15:38:18 -- common/autotest_common.sh@811 -- # return 0 00:14:38.973 15:38:18 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:38.973 15:38:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:38.973 15:38:18 -- nvmf/common.sh@116 -- # sync 00:14:38.973 15:38:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:38.973 15:38:18 -- nvmf/common.sh@119 -- # set +e 00:14:38.973 15:38:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:38.973 15:38:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:38.973 rmmod nvme_tcp 00:14:39.232 rmmod nvme_fabrics 00:14:39.232 rmmod nvme_keyring 00:14:39.232 15:38:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:39.232 15:38:18 -- nvmf/common.sh@123 -- # set -e 00:14:39.232 15:38:18 -- nvmf/common.sh@124 -- # return 0 00:14:39.232 15:38:18 -- nvmf/common.sh@477 -- # '[' -n 2099509 ']' 00:14:39.232 15:38:18 -- nvmf/common.sh@478 -- # killprocess 2099509 00:14:39.232 15:38:18 -- common/autotest_common.sh@926 -- # '[' -z 2099509 ']' 00:14:39.232 15:38:18 -- common/autotest_common.sh@930 -- # kill -0 2099509 00:14:39.232 15:38:18 -- common/autotest_common.sh@931 -- # uname 00:14:39.232 15:38:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:39.232 15:38:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2099509 00:14:39.232 15:38:18 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:14:39.232 15:38:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:39.232 15:38:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2099509' 00:14:39.232 killing process with pid 2099509 00:14:39.232 15:38:18 -- common/autotest_common.sh@945 -- # kill 2099509 00:14:39.232 15:38:18 -- common/autotest_common.sh@950 -- # wait 2099509 00:14:39.490 15:38:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:39.490 15:38:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:39.490 15:38:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:39.490 15:38:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:39.490 15:38:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:39.490 15:38:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:39.490 15:38:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:39.490 15:38:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.391 15:38:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:41.391 00:14:41.391 real 0m43.650s 00:14:41.391 user 1m13.726s 00:14:41.391 sys 0m8.654s 00:14:41.391 15:38:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:41.391 15:38:20 -- common/autotest_common.sh@10 -- # set +x 00:14:41.391 ************************************ 00:14:41.391 END TEST nvmf_lvs_grow 00:14:41.391 ************************************ 00:14:41.649 15:38:20 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:41.649 15:38:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:41.649 15:38:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:41.649 15:38:20 -- common/autotest_common.sh@10 -- # set +x 00:14:41.649 ************************************ 00:14:41.649 START TEST nvmf_bdev_io_wait 00:14:41.649 ************************************ 00:14:41.649 15:38:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:41.649 * Looking for test storage... 
00:14:41.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:41.649 15:38:20 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:41.649 15:38:20 -- nvmf/common.sh@7 -- # uname -s 00:14:41.649 15:38:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:41.649 15:38:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:41.649 15:38:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:41.649 15:38:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:41.649 15:38:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:41.649 15:38:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:41.649 15:38:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:41.649 15:38:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:41.649 15:38:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:41.649 15:38:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:41.649 15:38:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.649 15:38:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.649 15:38:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:41.649 15:38:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:41.649 15:38:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:41.649 15:38:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:41.649 15:38:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:41.649 15:38:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:41.649 15:38:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:41.649 15:38:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.649 15:38:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.649 15:38:20 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.649 15:38:20 -- paths/export.sh@5 -- # export PATH 00:14:41.649 15:38:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:41.649 15:38:20 -- nvmf/common.sh@46 -- # : 0 00:14:41.649 15:38:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:41.649 15:38:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:41.649 15:38:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:41.649 15:38:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:41.649 15:38:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:41.649 15:38:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:41.649 15:38:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:41.649 15:38:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:41.649 15:38:20 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:41.649 15:38:20 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:41.649 15:38:20 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:41.649 15:38:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:41.649 15:38:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:41.649 15:38:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:41.649 15:38:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:41.649 15:38:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:41.649 15:38:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.649 15:38:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.649 15:38:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:41.649 15:38:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:41.649 15:38:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:41.649 15:38:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:41.649 15:38:20 -- common/autotest_common.sh@10 -- # set +x 00:14:43.549 15:38:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:43.549 15:38:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:43.549 15:38:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:43.549 15:38:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:43.549 15:38:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:43.549 15:38:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:43.549 15:38:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:43.549 15:38:22 -- nvmf/common.sh@294 -- # net_devs=() 00:14:43.549 15:38:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:43.549 15:38:22 -- 
nvmf/common.sh@295 -- # e810=() 00:14:43.549 15:38:22 -- nvmf/common.sh@295 -- # local -ga e810 00:14:43.549 15:38:22 -- nvmf/common.sh@296 -- # x722=() 00:14:43.549 15:38:22 -- nvmf/common.sh@296 -- # local -ga x722 00:14:43.549 15:38:22 -- nvmf/common.sh@297 -- # mlx=() 00:14:43.549 15:38:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:43.549 15:38:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:43.549 15:38:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:43.549 15:38:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:43.549 15:38:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:43.549 15:38:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.549 15:38:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:43.549 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:43.549 15:38:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:43.549 15:38:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:43.549 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:43.549 15:38:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:43.549 15:38:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.549 15:38:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.549 15:38:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.549 15:38:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.549 15:38:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:14:43.549 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:43.549 15:38:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.549 15:38:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:43.549 15:38:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:43.549 15:38:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:43.549 15:38:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:43.549 15:38:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:43.549 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:43.549 15:38:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:43.549 15:38:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:43.549 15:38:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:43.549 15:38:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:43.549 15:38:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:43.549 15:38:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:43.549 15:38:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:43.549 15:38:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:43.549 15:38:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:43.549 15:38:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:43.549 15:38:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:43.549 15:38:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:43.549 15:38:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:43.549 15:38:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:43.549 15:38:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:43.549 15:38:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:43.549 15:38:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:43.549 15:38:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:43.549 15:38:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:43.549 15:38:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:43.549 15:38:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:43.549 15:38:22 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:43.549 15:38:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:43.549 15:38:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:43.549 15:38:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:43.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:43.821 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:14:43.821 00:14:43.821 --- 10.0.0.2 ping statistics --- 00:14:43.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.821 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:43.821 15:38:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:43.821 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:43.821 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:14:43.821 00:14:43.821 --- 10.0.0.1 ping statistics --- 00:14:43.821 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:43.821 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:14:43.821 15:38:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:43.821 15:38:22 -- nvmf/common.sh@410 -- # return 0 00:14:43.821 15:38:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:43.821 15:38:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:43.821 15:38:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:43.821 15:38:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:43.821 15:38:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:43.821 15:38:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:43.821 15:38:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:43.821 15:38:22 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:14:43.821 15:38:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:43.821 15:38:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:43.821 15:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 15:38:22 -- nvmf/common.sh@469 -- # nvmfpid=2102179 00:14:43.821 15:38:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:14:43.821 15:38:22 -- nvmf/common.sh@470 -- # waitforlisten 2102179 00:14:43.821 15:38:22 -- common/autotest_common.sh@819 -- # '[' -z 2102179 ']' 00:14:43.821 15:38:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.821 15:38:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:43.821 15:38:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.821 15:38:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:43.821 15:38:22 -- common/autotest_common.sh@10 -- # set +x 00:14:43.821 [2024-07-10 15:38:22.996753] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:43.821 [2024-07-10 15:38:22.996819] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:43.821 EAL: No free 2048 kB hugepages reported on node 1 00:14:43.821 [2024-07-10 15:38:23.061888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:43.821 [2024-07-10 15:38:23.179459] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:43.821 [2024-07-10 15:38:23.179608] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:43.821 [2024-07-10 15:38:23.179628] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:43.821 [2024-07-10 15:38:23.179647] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:43.821 [2024-07-10 15:38:23.179705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.821 [2024-07-10 15:38:23.179749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:43.821 [2024-07-10 15:38:23.180104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:43.821 [2024-07-10 15:38:23.180107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.078 15:38:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:44.078 15:38:23 -- common/autotest_common.sh@852 -- # return 0 00:14:44.078 15:38:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:44.078 15:38:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:44.078 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 15:38:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:14:44.079 15:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.079 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 15:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:14:44.079 15:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.079 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 15:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:44.079 15:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.079 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 [2024-07-10 15:38:23.352374] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:44.079 15:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:44.079 15:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.079 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 Malloc0 00:14:44.079 15:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:44.079 15:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.079 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 15:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:44.079 15:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.079 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 15:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:44.079 15:38:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:44.079 15:38:23 -- common/autotest_common.sh@10 -- # set +x 00:14:44.079 [2024-07-10 15:38:23.417024] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:44.079 15:38:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2102207 00:14:44.079 
15:38:23 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@30 -- # READ_PID=2102209 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # config=() 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # local subsystem config 00:14:44.079 15:38:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:44.079 { 00:14:44.079 "params": { 00:14:44.079 "name": "Nvme$subsystem", 00:14:44.079 "trtype": "$TEST_TRANSPORT", 00:14:44.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:44.079 "adrfam": "ipv4", 00:14:44.079 "trsvcid": "$NVMF_PORT", 00:14:44.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:44.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:44.079 "hdgst": ${hdgst:-false}, 00:14:44.079 "ddgst": ${ddgst:-false} 00:14:44.079 }, 00:14:44.079 "method": "bdev_nvme_attach_controller" 00:14:44.079 } 00:14:44.079 EOF 00:14:44.079 )") 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2102211 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # config=() 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # local subsystem config 00:14:44.079 15:38:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:44.079 { 00:14:44.079 "params": { 00:14:44.079 "name": "Nvme$subsystem", 00:14:44.079 "trtype": "$TEST_TRANSPORT", 00:14:44.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:44.079 "adrfam": "ipv4", 00:14:44.079 "trsvcid": "$NVMF_PORT", 00:14:44.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:44.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:44.079 "hdgst": ${hdgst:-false}, 00:14:44.079 "ddgst": ${ddgst:-false} 00:14:44.079 }, 00:14:44.079 "method": "bdev_nvme_attach_controller" 00:14:44.079 } 00:14:44.079 EOF 00:14:44.079 )") 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # cat 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # config=() 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2102214 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # local subsystem config 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@35 -- # sync 00:14:44.079 15:38:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:44.079 { 00:14:44.079 "params": { 00:14:44.079 "name": "Nvme$subsystem", 00:14:44.079 "trtype": "$TEST_TRANSPORT", 00:14:44.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:44.079 "adrfam": "ipv4", 00:14:44.079 "trsvcid": "$NVMF_PORT", 00:14:44.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:44.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:44.079 "hdgst": ${hdgst:-false}, 00:14:44.079 "ddgst": ${ddgst:-false} 00:14:44.079 }, 
00:14:44.079 "method": "bdev_nvme_attach_controller" 00:14:44.079 } 00:14:44.079 EOF 00:14:44.079 )") 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # cat 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # config=() 00:14:44.079 15:38:23 -- nvmf/common.sh@520 -- # local subsystem config 00:14:44.079 15:38:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:44.079 { 00:14:44.079 "params": { 00:14:44.079 "name": "Nvme$subsystem", 00:14:44.079 "trtype": "$TEST_TRANSPORT", 00:14:44.079 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:44.079 "adrfam": "ipv4", 00:14:44.079 "trsvcid": "$NVMF_PORT", 00:14:44.079 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:44.079 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:44.079 "hdgst": ${hdgst:-false}, 00:14:44.079 "ddgst": ${ddgst:-false} 00:14:44.079 }, 00:14:44.079 "method": "bdev_nvme_attach_controller" 00:14:44.079 } 00:14:44.079 EOF 00:14:44.079 )") 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # cat 00:14:44.079 15:38:23 -- nvmf/common.sh@542 -- # cat 00:14:44.079 15:38:23 -- nvmf/common.sh@544 -- # jq . 00:14:44.079 15:38:23 -- target/bdev_io_wait.sh@37 -- # wait 2102207 00:14:44.079 15:38:23 -- nvmf/common.sh@544 -- # jq . 00:14:44.079 15:38:23 -- nvmf/common.sh@545 -- # IFS=, 00:14:44.079 15:38:23 -- nvmf/common.sh@544 -- # jq . 00:14:44.080 15:38:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:44.080 "params": { 00:14:44.080 "name": "Nvme1", 00:14:44.080 "trtype": "tcp", 00:14:44.080 "traddr": "10.0.0.2", 00:14:44.080 "adrfam": "ipv4", 00:14:44.080 "trsvcid": "4420", 00:14:44.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.080 "hdgst": false, 00:14:44.080 "ddgst": false 00:14:44.080 }, 00:14:44.080 "method": "bdev_nvme_attach_controller" 00:14:44.080 }' 00:14:44.080 15:38:23 -- nvmf/common.sh@544 -- # jq . 
00:14:44.080 15:38:23 -- nvmf/common.sh@545 -- # IFS=, 00:14:44.080 15:38:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:44.080 "params": { 00:14:44.080 "name": "Nvme1", 00:14:44.080 "trtype": "tcp", 00:14:44.080 "traddr": "10.0.0.2", 00:14:44.080 "adrfam": "ipv4", 00:14:44.080 "trsvcid": "4420", 00:14:44.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.080 "hdgst": false, 00:14:44.080 "ddgst": false 00:14:44.080 }, 00:14:44.080 "method": "bdev_nvme_attach_controller" 00:14:44.080 }' 00:14:44.080 15:38:23 -- nvmf/common.sh@545 -- # IFS=, 00:14:44.080 15:38:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:44.080 "params": { 00:14:44.080 "name": "Nvme1", 00:14:44.080 "trtype": "tcp", 00:14:44.080 "traddr": "10.0.0.2", 00:14:44.080 "adrfam": "ipv4", 00:14:44.080 "trsvcid": "4420", 00:14:44.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.080 "hdgst": false, 00:14:44.080 "ddgst": false 00:14:44.080 }, 00:14:44.080 "method": "bdev_nvme_attach_controller" 00:14:44.080 }' 00:14:44.080 15:38:23 -- nvmf/common.sh@545 -- # IFS=, 00:14:44.080 15:38:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:44.080 "params": { 00:14:44.080 "name": "Nvme1", 00:14:44.080 "trtype": "tcp", 00:14:44.080 "traddr": "10.0.0.2", 00:14:44.080 "adrfam": "ipv4", 00:14:44.080 "trsvcid": "4420", 00:14:44.080 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:14:44.080 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:14:44.080 "hdgst": false, 00:14:44.080 "ddgst": false 00:14:44.080 }, 00:14:44.080 "method": "bdev_nvme_attach_controller" 00:14:44.080 }' 00:14:44.338 [2024-07-10 15:38:23.460763] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:44.338 [2024-07-10 15:38:23.460768] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:44.338 [2024-07-10 15:38:23.460762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:44.338 [2024-07-10 15:38:23.460852] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-10 15:38:23.460853] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-10 15:38:23.460853] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:14:44.338 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:14:44.338 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:14:44.338 [2024-07-10 15:38:23.462200] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:14:44.338 [2024-07-10 15:38:23.462268] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:14:44.338 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.338 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.338 [2024-07-10 15:38:23.635682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.338 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.595 [2024-07-10 15:38:23.730978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:14:44.595 [2024-07-10 15:38:23.737654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.595 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.595 [2024-07-10 15:38:23.813301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.595 [2024-07-10 15:38:23.832192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:14:44.595 [2024-07-10 15:38:23.903099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:14:44.595 [2024-07-10 15:38:23.919352] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:44.853 [2024-07-10 15:38:24.016354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:44.853 Running I/O for 1 seconds... 00:14:44.853 Running I/O for 1 seconds... 00:14:45.110 Running I/O for 1 seconds... 00:14:45.110 Running I/O for 1 seconds... 00:14:46.052 00:14:46.052 Latency(us) 00:14:46.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.052 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:14:46.052 Nvme1n1 : 1.02 7612.69 29.74 0.00 0.00 16674.35 6505.05 25049.32 00:14:46.052 =================================================================================================================== 00:14:46.052 Total : 7612.69 29.74 0.00 0.00 16674.35 6505.05 25049.32 00:14:46.052 00:14:46.052 Latency(us) 00:14:46.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.052 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:14:46.052 Nvme1n1 : 1.00 191967.12 749.87 0.00 0.00 664.22 285.20 928.43 00:14:46.052 =================================================================================================================== 00:14:46.052 Total : 191967.12 749.87 0.00 0.00 664.22 285.20 928.43 00:14:46.052 00:14:46.052 Latency(us) 00:14:46.052 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.052 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:14:46.052 Nvme1n1 : 1.01 7100.36 27.74 0.00 0.00 17963.05 6505.05 34952.53 00:14:46.053 =================================================================================================================== 00:14:46.053 Total : 7100.36 27.74 0.00 0.00 17963.05 6505.05 34952.53 00:14:46.053 00:14:46.053 Latency(us) 00:14:46.053 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:46.053 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:14:46.053 Nvme1n1 : 1.01 10140.06 39.61 0.00 0.00 12579.98 3276.80 22816.24 00:14:46.053 =================================================================================================================== 00:14:46.053 Total : 10140.06 39.61 0.00 0.00 12579.98 3276.80 22816.24 00:14:46.309 15:38:25 -- target/bdev_io_wait.sh@38 -- # wait 2102209 00:14:46.309 
15:38:25 -- target/bdev_io_wait.sh@39 -- # wait 2102211 00:14:46.309 15:38:25 -- target/bdev_io_wait.sh@40 -- # wait 2102214 00:14:46.309 15:38:25 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:46.309 15:38:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:46.309 15:38:25 -- common/autotest_common.sh@10 -- # set +x 00:14:46.309 15:38:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:46.309 15:38:25 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:14:46.309 15:38:25 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:14:46.309 15:38:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:46.309 15:38:25 -- nvmf/common.sh@116 -- # sync 00:14:46.309 15:38:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:46.309 15:38:25 -- nvmf/common.sh@119 -- # set +e 00:14:46.309 15:38:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:46.309 15:38:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:46.309 rmmod nvme_tcp 00:14:46.309 rmmod nvme_fabrics 00:14:46.309 rmmod nvme_keyring 00:14:46.309 15:38:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:46.310 15:38:25 -- nvmf/common.sh@123 -- # set -e 00:14:46.310 15:38:25 -- nvmf/common.sh@124 -- # return 0 00:14:46.310 15:38:25 -- nvmf/common.sh@477 -- # '[' -n 2102179 ']' 00:14:46.310 15:38:25 -- nvmf/common.sh@478 -- # killprocess 2102179 00:14:46.310 15:38:25 -- common/autotest_common.sh@926 -- # '[' -z 2102179 ']' 00:14:46.310 15:38:25 -- common/autotest_common.sh@930 -- # kill -0 2102179 00:14:46.310 15:38:25 -- common/autotest_common.sh@931 -- # uname 00:14:46.310 15:38:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:46.310 15:38:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2102179 00:14:46.566 15:38:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:46.566 15:38:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:46.566 15:38:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2102179' 00:14:46.566 killing process with pid 2102179 00:14:46.566 15:38:25 -- common/autotest_common.sh@945 -- # kill 2102179 00:14:46.566 15:38:25 -- common/autotest_common.sh@950 -- # wait 2102179 00:14:46.822 15:38:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:46.822 15:38:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:46.822 15:38:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:46.822 15:38:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.822 15:38:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:46.822 15:38:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.822 15:38:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.822 15:38:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.719 15:38:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:48.719 00:14:48.719 real 0m7.232s 00:14:48.719 user 0m16.675s 00:14:48.719 sys 0m3.392s 00:14:48.719 15:38:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.719 15:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:48.719 ************************************ 00:14:48.719 END TEST nvmf_bdev_io_wait 00:14:48.719 ************************************ 00:14:48.719 15:38:28 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:48.719 15:38:28 -- common/autotest_common.sh@1077 
-- # '[' 3 -le 1 ']' 00:14:48.719 15:38:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:48.719 15:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:48.719 ************************************ 00:14:48.719 START TEST nvmf_queue_depth 00:14:48.719 ************************************ 00:14:48.719 15:38:28 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:14:48.719 * Looking for test storage... 00:14:48.719 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.719 15:38:28 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.719 15:38:28 -- nvmf/common.sh@7 -- # uname -s 00:14:48.719 15:38:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.719 15:38:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.719 15:38:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.719 15:38:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.719 15:38:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.719 15:38:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.719 15:38:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.719 15:38:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.719 15:38:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.719 15:38:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.719 15:38:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.719 15:38:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:48.719 15:38:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.719 15:38:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.719 15:38:28 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.719 15:38:28 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.719 15:38:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.719 15:38:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.719 15:38:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.720 15:38:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.720 15:38:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.720 15:38:28 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.720 15:38:28 -- paths/export.sh@5 -- # export PATH 00:14:48.720 15:38:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.720 15:38:28 -- nvmf/common.sh@46 -- # : 0 00:14:48.720 15:38:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:48.720 15:38:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:48.720 15:38:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:48.720 15:38:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.720 15:38:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.720 15:38:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:48.720 15:38:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:48.720 15:38:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:48.978 15:38:28 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:14:48.978 15:38:28 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:14:48.978 15:38:28 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:48.978 15:38:28 -- target/queue_depth.sh@19 -- # nvmftestinit 00:14:48.978 15:38:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:48.978 15:38:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:48.978 15:38:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:48.978 15:38:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:48.978 15:38:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:48.978 15:38:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:48.978 15:38:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:48.978 15:38:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.978 15:38:28 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:48.978 15:38:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:48.978 15:38:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:48.978 15:38:28 -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 15:38:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:50.890 15:38:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:50.890 15:38:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:50.890 15:38:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:50.890 15:38:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:50.890 15:38:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:50.890 15:38:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:50.890 15:38:30 -- nvmf/common.sh@294 -- # net_devs=() 
00:14:50.890 15:38:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:50.890 15:38:30 -- nvmf/common.sh@295 -- # e810=() 00:14:50.890 15:38:30 -- nvmf/common.sh@295 -- # local -ga e810 00:14:50.890 15:38:30 -- nvmf/common.sh@296 -- # x722=() 00:14:50.890 15:38:30 -- nvmf/common.sh@296 -- # local -ga x722 00:14:50.890 15:38:30 -- nvmf/common.sh@297 -- # mlx=() 00:14:50.890 15:38:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:50.890 15:38:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:50.890 15:38:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:50.890 15:38:30 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:50.890 15:38:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:50.890 15:38:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:50.890 15:38:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:50.890 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:50.890 15:38:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:50.890 15:38:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:50.890 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:50.890 15:38:30 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:50.890 15:38:30 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:50.890 15:38:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.890 15:38:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:50.890 15:38:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
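The block above is nvmf/common.sh enumerating supported NICs by PCI device ID (0x1592/0x159b for Intel E810, 0x37d2 for X722, the 0x10xx/0xa2xx IDs for Mellanox) and then globbing /sys to find the kernel netdev behind each function. A minimal stand-alone sketch of the same lookup for the E810 ports found here (the lspci filter and loop are illustrative, not what the script itself runs):

# Map each Intel E810 function (8086:159b) to its net device, as the traced
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) glob does.
for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
    echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/)"
done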
00:14:50.890 15:38:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:50.890 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:50.890 15:38:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.890 15:38:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:50.890 15:38:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:50.890 15:38:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:50.890 15:38:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:50.890 15:38:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:50.890 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:50.890 15:38:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:50.890 15:38:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:50.890 15:38:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:50.890 15:38:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:50.890 15:38:30 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:50.890 15:38:30 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:50.890 15:38:30 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:50.890 15:38:30 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:50.890 15:38:30 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:50.890 15:38:30 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:50.890 15:38:30 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:50.890 15:38:30 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:50.890 15:38:30 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:50.890 15:38:30 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:50.890 15:38:30 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:50.890 15:38:30 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:50.890 15:38:30 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:50.890 15:38:30 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:50.890 15:38:30 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:50.890 15:38:30 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:50.890 15:38:30 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:50.890 15:38:30 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:50.890 15:38:30 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:50.890 15:38:30 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:50.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:50.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:14:50.890 00:14:50.890 --- 10.0.0.2 ping statistics --- 00:14:50.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.890 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:50.890 15:38:30 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:50.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:50.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:14:50.890 00:14:50.890 --- 10.0.0.1 ping statistics --- 00:14:50.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:50.890 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:14:50.890 15:38:30 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:50.890 15:38:30 -- nvmf/common.sh@410 -- # return 0 00:14:50.890 15:38:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:50.890 15:38:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:50.890 15:38:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:50.890 15:38:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:50.890 15:38:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:50.890 15:38:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:50.890 15:38:30 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:14:50.890 15:38:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:50.890 15:38:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:50.890 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 15:38:30 -- nvmf/common.sh@469 -- # nvmfpid=2104448 00:14:50.890 15:38:30 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:50.890 15:38:30 -- nvmf/common.sh@470 -- # waitforlisten 2104448 00:14:50.890 15:38:30 -- common/autotest_common.sh@819 -- # '[' -z 2104448 ']' 00:14:50.890 15:38:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.890 15:38:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:50.890 15:38:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.890 15:38:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:50.890 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:14:50.890 [2024-07-10 15:38:30.261737] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:50.890 [2024-07-10 15:38:30.261814] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:51.149 EAL: No free 2048 kB hugepages reported on node 1 00:14:51.149 [2024-07-10 15:38:30.336875] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:51.149 [2024-07-10 15:38:30.456077] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:51.149 [2024-07-10 15:38:30.456265] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:51.149 [2024-07-10 15:38:30.456297] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:51.149 [2024-07-10 15:38:30.456319] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
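The nvmf_tcp_init sequence above builds the loopback test topology on a single two-port E810: one port (cvl_0_0, 10.0.0.2/24) is moved into the cvl_0_0_ns_spdk namespace and used by the target, the other (cvl_0_1, 10.0.0.1/24) stays in the root namespace for the initiator, and both directions are verified with ping before nvmf_tgt is started inside the namespace. Condensed from the commands traced above (binary path shortened for readability):

# Target port lives in its own network namespace; initiator port stays in the root namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# The nvmf target then runs inside the namespace, as traced above:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2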
00:14:51.149 [2024-07-10 15:38:30.456376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.082 15:38:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:52.082 15:38:31 -- common/autotest_common.sh@852 -- # return 0 00:14:52.082 15:38:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:52.082 15:38:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:52.082 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 15:38:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:52.082 15:38:31 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:52.082 15:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.082 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 [2024-07-10 15:38:31.252055] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:52.082 15:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.082 15:38:31 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:52.082 15:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.082 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 Malloc0 00:14:52.082 15:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.082 15:38:31 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:52.082 15:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.082 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 15:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.082 15:38:31 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:52.082 15:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.082 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 15:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.082 15:38:31 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:52.082 15:38:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:52.082 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 [2024-07-10 15:38:31.315325] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:52.082 15:38:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:52.082 15:38:31 -- target/queue_depth.sh@30 -- # bdevperf_pid=2104612 00:14:52.082 15:38:31 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:14:52.082 15:38:31 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:52.082 15:38:31 -- target/queue_depth.sh@33 -- # waitforlisten 2104612 /var/tmp/bdevperf.sock 00:14:52.082 15:38:31 -- common/autotest_common.sh@819 -- # '[' -z 2104612 ']' 00:14:52.082 15:38:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:52.082 15:38:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:52.082 15:38:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
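With the target listening, queue_depth.sh provisions it entirely over RPC: a TCP transport with an 8192-byte IO unit size, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace and a TCP listener on 10.0.0.2:4420, and then a bdevperf instance in RPC-wait mode at queue depth 1024. Condensed from the rpc_cmd calls traced above (rpc.py path shortened; the controller attach and the perform_tests call follow in the log below):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevperf waits on its own RPC socket (-z) so the NVMe controller can be attached next.
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &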
00:14:52.082 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:52.082 15:38:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:52.082 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:14:52.082 [2024-07-10 15:38:31.357252] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:52.082 [2024-07-10 15:38:31.357313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104612 ] 00:14:52.082 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.082 [2024-07-10 15:38:31.417308] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:52.340 [2024-07-10 15:38:31.532096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.326 15:38:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:53.326 15:38:32 -- common/autotest_common.sh@852 -- # return 0 00:14:53.326 15:38:32 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:14:53.326 15:38:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:53.326 15:38:32 -- common/autotest_common.sh@10 -- # set +x 00:14:53.326 NVMe0n1 00:14:53.326 15:38:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:53.326 15:38:32 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:53.326 Running I/O for 10 seconds... 00:15:05.526 00:15:05.526 Latency(us) 00:15:05.526 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.526 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:05.526 Verification LBA range: start 0x0 length 0x4000 00:15:05.526 NVMe0n1 : 10.07 12371.90 48.33 0.00 0.00 82439.85 14951.92 62526.20 00:15:05.526 =================================================================================================================== 00:15:05.527 Total : 12371.90 48.33 0.00 0.00 82439.85 14951.92 62526.20 00:15:05.527 0 00:15:05.527 15:38:42 -- target/queue_depth.sh@39 -- # killprocess 2104612 00:15:05.527 15:38:42 -- common/autotest_common.sh@926 -- # '[' -z 2104612 ']' 00:15:05.527 15:38:42 -- common/autotest_common.sh@930 -- # kill -0 2104612 00:15:05.527 15:38:42 -- common/autotest_common.sh@931 -- # uname 00:15:05.527 15:38:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.527 15:38:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2104612 00:15:05.527 15:38:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:05.527 15:38:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:05.527 15:38:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2104612' 00:15:05.527 killing process with pid 2104612 00:15:05.527 15:38:42 -- common/autotest_common.sh@945 -- # kill 2104612 00:15:05.527 Received shutdown signal, test time was about 10.000000 seconds 00:15:05.527 00:15:05.527 Latency(us) 00:15:05.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:05.527 =================================================================================================================== 00:15:05.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:05.527 15:38:42 -- 
common/autotest_common.sh@950 -- # wait 2104612 00:15:05.527 15:38:43 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:05.527 15:38:43 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:05.527 15:38:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.527 15:38:43 -- nvmf/common.sh@116 -- # sync 00:15:05.527 15:38:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.527 15:38:43 -- nvmf/common.sh@119 -- # set +e 00:15:05.527 15:38:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.527 15:38:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.527 rmmod nvme_tcp 00:15:05.527 rmmod nvme_fabrics 00:15:05.527 rmmod nvme_keyring 00:15:05.527 15:38:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.527 15:38:43 -- nvmf/common.sh@123 -- # set -e 00:15:05.527 15:38:43 -- nvmf/common.sh@124 -- # return 0 00:15:05.527 15:38:43 -- nvmf/common.sh@477 -- # '[' -n 2104448 ']' 00:15:05.527 15:38:43 -- nvmf/common.sh@478 -- # killprocess 2104448 00:15:05.527 15:38:43 -- common/autotest_common.sh@926 -- # '[' -z 2104448 ']' 00:15:05.527 15:38:43 -- common/autotest_common.sh@930 -- # kill -0 2104448 00:15:05.527 15:38:43 -- common/autotest_common.sh@931 -- # uname 00:15:05.527 15:38:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.527 15:38:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2104448 00:15:05.527 15:38:43 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:05.527 15:38:43 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:05.527 15:38:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2104448' 00:15:05.527 killing process with pid 2104448 00:15:05.527 15:38:43 -- common/autotest_common.sh@945 -- # kill 2104448 00:15:05.527 15:38:43 -- common/autotest_common.sh@950 -- # wait 2104448 00:15:05.527 15:38:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.527 15:38:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.527 15:38:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.527 15:38:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.527 15:38:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.527 15:38:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.527 15:38:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.527 15:38:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.465 15:38:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:06.465 00:15:06.465 real 0m17.442s 00:15:06.465 user 0m25.096s 00:15:06.465 sys 0m3.145s 00:15:06.465 15:38:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:06.465 15:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:06.465 ************************************ 00:15:06.465 END TEST nvmf_queue_depth 00:15:06.465 ************************************ 00:15:06.465 15:38:45 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:06.465 15:38:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:06.465 15:38:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:06.465 15:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:06.465 ************************************ 00:15:06.465 START TEST nvmf_multipath 00:15:06.465 ************************************ 00:15:06.465 15:38:45 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:06.465 * Looking for test storage... 00:15:06.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:06.465 15:38:45 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.465 15:38:45 -- nvmf/common.sh@7 -- # uname -s 00:15:06.465 15:38:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.465 15:38:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.465 15:38:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.465 15:38:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.465 15:38:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.465 15:38:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.465 15:38:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.465 15:38:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.465 15:38:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.465 15:38:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.465 15:38:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.465 15:38:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.465 15:38:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.465 15:38:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.465 15:38:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:06.465 15:38:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:06.465 15:38:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.465 15:38:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.465 15:38:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.465 15:38:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.465 15:38:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.465 15:38:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.465 15:38:45 -- paths/export.sh@5 -- # export PATH 00:15:06.465 15:38:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.465 15:38:45 -- nvmf/common.sh@46 -- # : 0 00:15:06.465 15:38:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:06.465 15:38:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:06.465 15:38:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:06.465 15:38:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.465 15:38:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.465 15:38:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:06.465 15:38:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:06.465 15:38:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:06.465 15:38:45 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.465 15:38:45 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.465 15:38:45 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:06.465 15:38:45 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:06.465 15:38:45 -- target/multipath.sh@43 -- # nvmftestinit 00:15:06.465 15:38:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:06.465 15:38:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.465 15:38:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:06.465 15:38:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:06.465 15:38:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:06.465 15:38:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.465 15:38:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.465 15:38:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.465 15:38:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:06.465 15:38:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:06.465 15:38:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:06.465 15:38:45 -- common/autotest_common.sh@10 -- # set +x 00:15:08.366 15:38:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.366 15:38:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:08.366 15:38:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:08.366 15:38:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:08.366 15:38:47 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:08.366 15:38:47 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:08.366 15:38:47 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:15:08.366 15:38:47 -- nvmf/common.sh@294 -- # net_devs=() 00:15:08.366 15:38:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:08.366 15:38:47 -- nvmf/common.sh@295 -- # e810=() 00:15:08.366 15:38:47 -- nvmf/common.sh@295 -- # local -ga e810 00:15:08.366 15:38:47 -- nvmf/common.sh@296 -- # x722=() 00:15:08.366 15:38:47 -- nvmf/common.sh@296 -- # local -ga x722 00:15:08.366 15:38:47 -- nvmf/common.sh@297 -- # mlx=() 00:15:08.366 15:38:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:08.366 15:38:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.366 15:38:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:08.366 15:38:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:08.366 15:38:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:08.366 15:38:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.366 15:38:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:08.366 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:08.366 15:38:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.366 15:38:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:08.366 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:08.366 15:38:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:08.366 15:38:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:08.367 15:38:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:08.367 15:38:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:08.367 15:38:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:08.367 15:38:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.367 15:38:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.367 15:38:47 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:15:08.367 15:38:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.367 15:38:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:08.367 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:08.367 15:38:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.367 15:38:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.367 15:38:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.367 15:38:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.367 15:38:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.367 15:38:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:08.367 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:08.367 15:38:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.367 15:38:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:08.367 15:38:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:08.367 15:38:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:08.367 15:38:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:08.367 15:38:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:08.367 15:38:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:08.367 15:38:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:08.367 15:38:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:08.367 15:38:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:08.367 15:38:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:08.367 15:38:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:08.367 15:38:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:08.367 15:38:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:08.367 15:38:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:08.367 15:38:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:08.367 15:38:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:08.367 15:38:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:08.367 15:38:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:08.367 15:38:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:08.367 15:38:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:08.367 15:38:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:08.367 15:38:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:08.367 15:38:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:08.367 15:38:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:08.367 15:38:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:08.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:08.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:15:08.367 00:15:08.367 --- 10.0.0.2 ping statistics --- 00:15:08.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.367 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:15:08.367 15:38:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:08.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:08.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:15:08.367 00:15:08.367 --- 10.0.0.1 ping statistics --- 00:15:08.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:08.367 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:15:08.367 15:38:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:08.367 15:38:47 -- nvmf/common.sh@410 -- # return 0 00:15:08.367 15:38:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.367 15:38:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:08.367 15:38:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:08.367 15:38:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:08.367 15:38:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:08.367 15:38:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:08.367 15:38:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:08.367 15:38:47 -- target/multipath.sh@45 -- # '[' -z ']' 00:15:08.367 15:38:47 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:15:08.367 only one NIC for nvmf test 00:15:08.367 15:38:47 -- target/multipath.sh@47 -- # nvmftestfini 00:15:08.367 15:38:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:08.367 15:38:47 -- nvmf/common.sh@116 -- # sync 00:15:08.367 15:38:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:08.367 15:38:47 -- nvmf/common.sh@119 -- # set +e 00:15:08.367 15:38:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:08.367 15:38:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:08.627 rmmod nvme_tcp 00:15:08.627 rmmod nvme_fabrics 00:15:08.627 rmmod nvme_keyring 00:15:08.627 15:38:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:08.627 15:38:47 -- nvmf/common.sh@123 -- # set -e 00:15:08.627 15:38:47 -- nvmf/common.sh@124 -- # return 0 00:15:08.627 15:38:47 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:08.627 15:38:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:08.627 15:38:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:08.627 15:38:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:08.627 15:38:47 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:08.627 15:38:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:08.627 15:38:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.627 15:38:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:08.627 15:38:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.533 15:38:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:10.533 15:38:49 -- target/multipath.sh@48 -- # exit 0 00:15:10.533 15:38:49 -- target/multipath.sh@1 -- # nvmftestfini 00:15:10.533 15:38:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:10.533 15:38:49 -- nvmf/common.sh@116 -- # sync 00:15:10.533 15:38:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:10.533 15:38:49 -- nvmf/common.sh@119 -- # set +e 00:15:10.533 15:38:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:10.533 15:38:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:10.533 15:38:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:10.533 15:38:49 -- nvmf/common.sh@123 -- # set -e 00:15:10.533 15:38:49 -- nvmf/common.sh@124 -- # return 0 00:15:10.533 15:38:49 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:15:10.533 15:38:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:10.533 15:38:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:10.533 15:38:49 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:15:10.533 15:38:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:10.533 15:38:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:10.533 15:38:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.533 15:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.533 15:38:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.533 15:38:49 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:10.533 00:15:10.533 real 0m4.344s 00:15:10.533 user 0m0.826s 00:15:10.533 sys 0m1.510s 00:15:10.533 15:38:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:10.533 15:38:49 -- common/autotest_common.sh@10 -- # set +x 00:15:10.533 ************************************ 00:15:10.533 END TEST nvmf_multipath 00:15:10.533 ************************************ 00:15:10.533 15:38:49 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:10.533 15:38:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:10.533 15:38:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:10.533 15:38:49 -- common/autotest_common.sh@10 -- # set +x 00:15:10.533 ************************************ 00:15:10.533 START TEST nvmf_zcopy 00:15:10.533 ************************************ 00:15:10.533 15:38:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:10.791 * Looking for test storage... 00:15:10.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:10.791 15:38:49 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:10.792 15:38:49 -- nvmf/common.sh@7 -- # uname -s 00:15:10.792 15:38:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:10.792 15:38:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:10.792 15:38:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:10.792 15:38:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:10.792 15:38:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:10.792 15:38:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:10.792 15:38:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:10.792 15:38:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:10.792 15:38:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:10.792 15:38:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:10.792 15:38:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.792 15:38:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.792 15:38:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:10.792 15:38:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:10.792 15:38:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:10.792 15:38:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:10.792 15:38:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:10.792 15:38:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:10.792 15:38:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:10.792 15:38:49 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.792 15:38:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.792 15:38:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.792 15:38:49 -- paths/export.sh@5 -- # export PATH 00:15:10.792 15:38:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:10.792 15:38:49 -- nvmf/common.sh@46 -- # : 0 00:15:10.792 15:38:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:10.792 15:38:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:10.792 15:38:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:10.792 15:38:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:10.792 15:38:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:10.792 15:38:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:10.792 15:38:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:10.792 15:38:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:10.792 15:38:49 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:10.792 15:38:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:10.792 15:38:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:10.792 15:38:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:10.792 15:38:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:10.792 15:38:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:10.792 15:38:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:10.792 15:38:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:10.792 15:38:49 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:10.792 15:38:49 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:10.792 15:38:49 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:10.792 15:38:49 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:10.792 15:38:49 -- common/autotest_common.sh@10 -- # set +x 00:15:12.692 15:38:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:12.692 15:38:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:12.692 15:38:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:12.692 15:38:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:12.692 15:38:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:12.692 15:38:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:12.692 15:38:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:12.692 15:38:52 -- nvmf/common.sh@294 -- # net_devs=() 00:15:12.692 15:38:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:12.692 15:38:52 -- nvmf/common.sh@295 -- # e810=() 00:15:12.692 15:38:52 -- nvmf/common.sh@295 -- # local -ga e810 00:15:12.692 15:38:52 -- nvmf/common.sh@296 -- # x722=() 00:15:12.692 15:38:52 -- nvmf/common.sh@296 -- # local -ga x722 00:15:12.692 15:38:52 -- nvmf/common.sh@297 -- # mlx=() 00:15:12.692 15:38:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:12.692 15:38:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:12.692 15:38:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:12.692 15:38:52 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:12.692 15:38:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:12.692 15:38:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.692 15:38:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:12.692 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:12.692 15:38:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:12.692 15:38:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:12.692 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:12.692 
15:38:52 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:12.692 15:38:52 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.692 15:38:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.692 15:38:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.692 15:38:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.692 15:38:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:12.692 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:12.692 15:38:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.692 15:38:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:12.692 15:38:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:12.692 15:38:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:12.692 15:38:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:12.692 15:38:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:12.692 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:12.692 15:38:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:12.692 15:38:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:12.692 15:38:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:12.692 15:38:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:12.692 15:38:52 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:12.692 15:38:52 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:12.692 15:38:52 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:12.692 15:38:52 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:12.692 15:38:52 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:12.692 15:38:52 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:12.692 15:38:52 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:12.692 15:38:52 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:12.692 15:38:52 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:12.692 15:38:52 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:12.692 15:38:52 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:12.692 15:38:52 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:12.692 15:38:52 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:12.692 15:38:52 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.952 15:38:52 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.952 15:38:52 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.952 15:38:52 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:12.952 15:38:52 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.952 15:38:52 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.952 15:38:52 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.952 15:38:52 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:12.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:12.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:15:12.952 00:15:12.952 --- 10.0.0.2 ping statistics --- 00:15:12.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.952 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:15:12.952 15:38:52 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:15:12.952 00:15:12.952 --- 10.0.0.1 ping statistics --- 00:15:12.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.952 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:15:12.952 15:38:52 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.952 15:38:52 -- nvmf/common.sh@410 -- # return 0 00:15:12.952 15:38:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:12.952 15:38:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.952 15:38:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:12.952 15:38:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:12.952 15:38:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.952 15:38:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:12.952 15:38:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:12.952 15:38:52 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:12.952 15:38:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:12.952 15:38:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:12.952 15:38:52 -- common/autotest_common.sh@10 -- # set +x 00:15:12.952 15:38:52 -- nvmf/common.sh@469 -- # nvmfpid=2109859 00:15:12.952 15:38:52 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:12.952 15:38:52 -- nvmf/common.sh@470 -- # waitforlisten 2109859 00:15:12.952 15:38:52 -- common/autotest_common.sh@819 -- # '[' -z 2109859 ']' 00:15:12.952 15:38:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.952 15:38:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:12.952 15:38:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.952 15:38:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:12.952 15:38:52 -- common/autotest_common.sh@10 -- # set +x 00:15:12.952 [2024-07-10 15:38:52.216379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:12.952 [2024-07-10 15:38:52.216476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.952 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.952 [2024-07-10 15:38:52.282169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.210 [2024-07-10 15:38:52.390109] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:13.210 [2024-07-10 15:38:52.390254] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:13.210 [2024-07-10 15:38:52.390278] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:13.211 [2024-07-10 15:38:52.390295] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:13.211 [2024-07-10 15:38:52.390328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.162 15:38:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:14.162 15:38:53 -- common/autotest_common.sh@852 -- # return 0 00:15:14.162 15:38:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:14.162 15:38:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:14.162 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 15:38:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.162 15:38:53 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:14.162 15:38:53 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:14.162 15:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.162 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 [2024-07-10 15:38:53.205453] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:14.162 15:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.162 15:38:53 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:14.162 15:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.162 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 15:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.162 15:38:53 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:14.162 15:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.162 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 [2024-07-10 15:38:53.221623] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:14.162 15:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.162 15:38:53 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:14.162 15:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.162 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 15:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.162 15:38:53 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:15:14.162 15:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.162 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 malloc0 00:15:14.162 15:38:53 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:15:14.162 15:38:53 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:15:14.162 15:38:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.162 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:15:14.162 15:38:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.162 15:38:53 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:15:14.162 15:38:53 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:15:14.162 15:38:53 -- nvmf/common.sh@520 -- # config=() 00:15:14.162 15:38:53 -- nvmf/common.sh@520 -- # local subsystem config 00:15:14.162 15:38:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:14.162 15:38:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:14.162 { 00:15:14.162 "params": { 00:15:14.162 "name": "Nvme$subsystem", 00:15:14.162 "trtype": "$TEST_TRANSPORT", 00:15:14.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:14.162 "adrfam": "ipv4", 00:15:14.162 "trsvcid": "$NVMF_PORT", 00:15:14.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:14.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:14.162 "hdgst": ${hdgst:-false}, 00:15:14.162 "ddgst": ${ddgst:-false} 00:15:14.162 }, 00:15:14.162 "method": "bdev_nvme_attach_controller" 00:15:14.162 } 00:15:14.162 EOF 00:15:14.162 )") 00:15:14.162 15:38:53 -- nvmf/common.sh@542 -- # cat 00:15:14.162 15:38:53 -- nvmf/common.sh@544 -- # jq . 00:15:14.162 15:38:53 -- nvmf/common.sh@545 -- # IFS=, 00:15:14.162 15:38:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:14.162 "params": { 00:15:14.162 "name": "Nvme1", 00:15:14.162 "trtype": "tcp", 00:15:14.162 "traddr": "10.0.0.2", 00:15:14.162 "adrfam": "ipv4", 00:15:14.162 "trsvcid": "4420", 00:15:14.162 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:14.162 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:14.162 "hdgst": false, 00:15:14.162 "ddgst": false 00:15:14.162 }, 00:15:14.162 "method": "bdev_nvme_attach_controller" 00:15:14.162 }' 00:15:14.162 [2024-07-10 15:38:53.300675] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:14.162 [2024-07-10 15:38:53.300758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110012 ] 00:15:14.162 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.162 [2024-07-10 15:38:53.369765] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.162 [2024-07-10 15:38:53.488852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.729 Running I/O for 10 seconds... 
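For reference, the target-side setup performed by the rpc_cmd calls traced above reduces to a handful of RPCs against the nvmf_tgt that was started earlier (inside the cvl_0_0_ns_spdk network namespace, listening on /var/tmp/spdk.sock). The sketch below is a minimal, hedged reconstruction, not the autotest harness itself: it assumes rpc_cmd behaves like scripts/rpc.py against the default socket, and the flags, NQNs, and bdev names are copied verbatim from the trace.

# Minimal sketch of the target-side RPC sequence shown in the trace above.
# Assumption: scripts/rpc.py with the default /var/tmp/spdk.sock socket stands in
# for the harness's rpc_cmd wrapper.
rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
rpc nvmf_create_transport -t tcp -o -c 0 --zcopy                                    # TCP transport with zero-copy enabled, in-capsule data size 0 (flags as traced)
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 # allow any host, serial number, max 10 namespaces
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 4096 -b malloc0                                           # 32 MB malloc bdev with 4096-byte blocks
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1                   # expose malloc0 as namespace 1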
00:15:24.693 
00:15:24.693 Latency(us)
00:15:24.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:24.693 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:24.693 Verification LBA range: start 0x0 length 0x1000
00:15:24.693 Nvme1n1 : 10.01 7501.79 58.61 0.00 0.00 17017.83 807.06 26991.12
00:15:24.694 ===================================================================================================================
00:15:24.694 Total : 7501.79 58.61 0.00 0.00 17017.83 807.06 26991.12
00:15:24.951 15:39:04 -- target/zcopy.sh@39 -- # perfpid=2111481
00:15:24.951 15:39:04 -- target/zcopy.sh@41 -- # xtrace_disable
00:15:24.951 15:39:04 -- common/autotest_common.sh@10 -- # set +x
00:15:24.951 15:39:04 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:24.951 15:39:04 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:24.951 15:39:04 -- nvmf/common.sh@520 -- # config=()
00:15:24.951 15:39:04 -- nvmf/common.sh@520 -- # local subsystem config
00:15:24.951 15:39:04 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:24.951 15:39:04 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:24.951 {
00:15:24.951 "params": {
00:15:24.951 "name": "Nvme$subsystem",
00:15:24.951 "trtype": "$TEST_TRANSPORT",
00:15:24.951 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:24.951 "adrfam": "ipv4",
00:15:24.951 "trsvcid": "$NVMF_PORT",
00:15:24.951 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:24.951 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:24.951 "hdgst": ${hdgst:-false},
00:15:24.951 "ddgst": ${ddgst:-false}
00:15:24.951 },
00:15:24.951 "method": "bdev_nvme_attach_controller"
00:15:24.951 }
00:15:24.951 EOF
00:15:24.951 )")
00:15:24.951 15:39:04 -- nvmf/common.sh@542 -- # cat
00:15:24.951 [2024-07-10 15:39:04.156860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:24.951 [2024-07-10 15:39:04.156905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:24.951 15:39:04 -- nvmf/common.sh@544 -- # jq .
00:15:24.951 15:39:04 -- nvmf/common.sh@545 -- # IFS=, 00:15:24.951 15:39:04 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:24.951 "params": { 00:15:24.951 "name": "Nvme1", 00:15:24.951 "trtype": "tcp", 00:15:24.951 "traddr": "10.0.0.2", 00:15:24.951 "adrfam": "ipv4", 00:15:24.951 "trsvcid": "4420", 00:15:24.951 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:24.951 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:24.951 "hdgst": false, 00:15:24.951 "ddgst": false 00:15:24.951 }, 00:15:24.951 "method": "bdev_nvme_attach_controller" 00:15:24.951 }' 00:15:24.951 [2024-07-10 15:39:04.164831] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.951 [2024-07-10 15:39:04.164861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.951 [2024-07-10 15:39:04.172848] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.951 [2024-07-10 15:39:04.172876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.951 [2024-07-10 15:39:04.180870] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.951 [2024-07-10 15:39:04.180897] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.951 [2024-07-10 15:39:04.188897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.951 [2024-07-10 15:39:04.188924] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.951 [2024-07-10 15:39:04.193648] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:24.951 [2024-07-10 15:39:04.193715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111481 ] 00:15:24.951 [2024-07-10 15:39:04.196920] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.951 [2024-07-10 15:39:04.196950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.951 [2024-07-10 15:39:04.204938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.951 [2024-07-10 15:39:04.204965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.951 [2024-07-10 15:39:04.212958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.951 [2024-07-10 15:39:04.212985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.951 [2024-07-10 15:39:04.220982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.221009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.952 [2024-07-10 15:39:04.229004] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.229030] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.237024] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.237050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.245043] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.245069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.253067] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.253094] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.261091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.261117] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.262136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:24.952 [2024-07-10 15:39:04.269138] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.269179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.277161] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.277198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.285159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.285186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.293180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.293206] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.301223] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.301249] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.309227] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.309253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.317249] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.317275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:24.952 [2024-07-10 15:39:04.325277] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:24.952 [2024-07-10 15:39:04.325312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.333325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.333364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.341318] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.341344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.349341] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.349370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.357361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 
15:39:04.357387] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.365384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.365411] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.373404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.373438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.381432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.381458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.381517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.210 [2024-07-10 15:39:04.389453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.389479] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.397497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.397532] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.405526] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.405566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.413558] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.413610] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.421573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.421612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.429594] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.429634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.437618] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.437658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.445613] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.445640] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.453663] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.453701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.461688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.461728] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.469700] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.469744] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.477715] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.477741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.485742] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.485771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.493757] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.493785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.501781] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.501809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.509801] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.509829] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.517826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.517856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.525851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.525878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.533871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.533899] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.541891] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.541918] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.549916] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.549946] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 Running I/O for 5 seconds... 
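The second bdevperf pass above (-t 5 -q 128 -w randrw -M 50 -o 8192) is driven by the same gen_nvmf_target_json output printed in the trace. Reconstructed as a standalone file, the configuration fed through /dev/fd/63 looks roughly like the sketch below; the bdev_nvme_attach_controller parameters are taken from the printf in the trace, while the outer "subsystems"/"bdev" wrapper and the /tmp/bdevperf.json path are assumptions added for illustration.

# Hypothetical standalone equivalent of the gen_nvmf_target_json | bdevperf pipeline above.
cat > /tmp/bdevperf.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# 5 seconds, queue depth 128, 50/50 random read/write, 8 KiB I/O size, matching the run above.
./build/examples/bdevperf --json /tmp/bdevperf.json -t 5 -q 128 -w randrw -M 50 -o 8192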
00:15:25.210 [2024-07-10 15:39:04.557938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.557965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.572230] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.572271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.210 [2024-07-10 15:39:04.582662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.210 [2024-07-10 15:39:04.582694] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.594374] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.594405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.605282] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.605313] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.618114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.618144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.628325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.628356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.639853] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.639885] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.651582] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.651612] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.662749] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.662780] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.673485] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.673516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.684480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.684511] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.695348] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.695379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.706620] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.706652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.717624] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 
[2024-07-10 15:39:04.717655] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.728794] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.728826] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.739378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.739410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.750423] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.468 [2024-07-10 15:39:04.750466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.468 [2024-07-10 15:39:04.761603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.761634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.469 [2024-07-10 15:39:04.774595] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.774626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.469 [2024-07-10 15:39:04.784260] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.784298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.469 [2024-07-10 15:39:04.796080] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.796111] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.469 [2024-07-10 15:39:04.807143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.807175] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.469 [2024-07-10 15:39:04.817850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.817881] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.469 [2024-07-10 15:39:04.828981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.829012] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.469 [2024-07-10 15:39:04.840255] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.469 [2024-07-10 15:39:04.840285] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.851243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.851275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.863984] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.864015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.874129] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.874159] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.885706] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.885736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.896210] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.896241] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.907344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.907374] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.918321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.918350] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.931159] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.931190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.940858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.940889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.952542] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.952572] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.963391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.963421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.974300] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.974331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.985253] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.985283] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:04.995851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:04.995890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.008877] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.008908] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.018705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.018735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.030321] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.030351] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.041269] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.041299] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.053553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.053583] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.063860] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.063890] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.074973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.075004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.086167] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.086198] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.727 [2024-07-10 15:39:05.097413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.727 [2024-07-10 15:39:05.097451] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.108724] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.108755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.119301] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.119331] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.130279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.130309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.141280] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.141310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.152533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.152564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.163573] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.163604] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.174677] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.174708] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.185451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.185481] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.196122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.196153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.207054] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.207084] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.218018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.218048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.230907] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.230937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.240434] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.240465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.251736] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.251767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.262655] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.262686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.273377] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.273408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.286381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.286412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.296710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.296741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.307615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.307646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.320406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.320446] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.330327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.330357] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.340824] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.340855] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.351960] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.351991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:25.987 [2024-07-10 15:39:05.362774] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:25.987 [2024-07-10 15:39:05.362804] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.373790] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.373820] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.384283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.384314] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.395222] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.395252] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.405898] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.405928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.416826] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.416857] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.429561] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.429591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.439305] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.439336] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.450251] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.450281] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.461038] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.461069] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.472044] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.472074] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.482647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.482677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.493580] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.493611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.506169] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.506199] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.516314] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.516344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.527598] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.527629] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.538777] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.538807] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.549773] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.549803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.560818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.560849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.571906] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.571937] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.583073] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.583103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.594275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.594306] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.607279] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.607310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.246 [2024-07-10 15:39:05.616991] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.246 [2024-07-10 15:39:05.617021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.505 [2024-07-10 15:39:05.628552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.505 [2024-07-10 15:39:05.628582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.505 [2024-07-10 15:39:05.639846] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.505 [2024-07-10 15:39:05.639889] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.505 [2024-07-10 15:39:05.651281] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.505 [2024-07-10 15:39:05.651312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.505 [2024-07-10 15:39:05.662344] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.505 [2024-07-10 15:39:05.662375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.505 [2024-07-10 15:39:05.675086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.505 [2024-07-10 15:39:05.675116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:26.505 [2024-07-10 15:39:05.684627] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:26.505 [2024-07-10 15:39:05.684659] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:26.505 [2024-07-10 15:39:05.696317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:26.505 [2024-07-10 15:39:05.696347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair, "Requested NSID 1 already in use" (subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext) followed by "Unable to add namespace" (nvmf_rpc.c:1513:nvmf_rpc_ns_paused), repeats for every subsequent add attempt from 15:39:05.707 through 15:39:09.047 ...]
00:15:29.888 [2024-07-10 15:39:09.058753] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:29.888 [2024-07-10 15:39:09.058785] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:29.888 [2024-07-10 15:39:09.075564] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.075597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.085689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.085720] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.097102] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.097133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.108489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.108519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.119351] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.119382] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.130566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.130597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.141651] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.141683] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.152845] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.152876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.163564] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.163595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.174647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.174679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.187176] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.187218] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.197134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.197164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.208746] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.208776] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.219881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.219912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.230887] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.888 [2024-07-10 15:39:09.230918] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.888 [2024-07-10 15:39:09.242313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.889 [2024-07-10 15:39:09.242344] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:29.889 [2024-07-10 15:39:09.253362] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:29.889 [2024-07-10 15:39:09.253392] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.264518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.264549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.275409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.275449] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.286529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.286560] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.299403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.299448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.309471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.309502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.321456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.321495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.332459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.332495] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.344943] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.344975] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.355026] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.355057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.367132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.367162] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.377857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.377887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.388647] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.388677] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.401010] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.401049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.410857] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.410887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.422482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.422512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.433661] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.433693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.444479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.444510] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.455336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.455366] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.466216] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.466247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.477587] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.477618] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.490145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.490176] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.499903] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.499935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.511418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.511458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.148 [2024-07-10 15:39:09.522488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.148 [2024-07-10 15:39:09.522519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.533566] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.533596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.544565] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.544595] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.555604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.555634] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.566375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.566405] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.575772] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.575802] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 00:15:30.407 Latency(us) 00:15:30.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:30.407 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:15:30.407 Nvme1n1 : 5.01 11616.80 90.76 0.00 0.00 11004.40 4757.43 21845.33 00:15:30.407 =================================================================================================================== 00:15:30.407 Total : 11616.80 90.76 0.00 0.00 11004.40 4757.43 21845.33 00:15:30.407 [2024-07-10 15:39:09.582290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.582320] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.590308] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.590338] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.598328] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.598356] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.606406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.606466] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.614443] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.614504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.622448] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.622504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.630470] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.630516] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.638491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.638537] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.646520] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.646568] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.654541] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.654588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.662551] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.662596] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.670577] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.670623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.678611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.678656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.686628] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.686679] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.694641] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.694688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.702662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.702706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.710684] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.710729] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.718712] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.718766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.726701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.726739] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.734719] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.734745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.742739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.742765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.750760] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.750787] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.758783] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.758809] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.766850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.766895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.407 [2024-07-10 15:39:09.774873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.407 [2024-07-10 15:39:09.774919] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.665 [2024-07-10 15:39:09.782879] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.665 [2024-07-10 15:39:09.782916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.665 [2024-07-10 15:39:09.790874] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.665 [2024-07-10 15:39:09.790901] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.798893] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.798919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.806915] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.806942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.814936] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.814963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.822978] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.823011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.831025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.831070] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.839051] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.839095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.847031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.847058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.855053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.855079] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 [2024-07-10 15:39:09.863074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:30.666 [2024-07-10 15:39:09.863101] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:30.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2111481) - No such process 00:15:30.666 15:39:09 -- target/zcopy.sh@49 -- # wait 2111481 00:15:30.666 15:39:09 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:30.666 15:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.666 15:39:09 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 15:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.666 15:39:09 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:30.666 15:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.666 15:39:09 -- common/autotest_common.sh@10 -- # set +x 
00:15:30.666 delay0 00:15:30.666 15:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.666 15:39:09 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:30.666 15:39:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:30.666 15:39:09 -- common/autotest_common.sh@10 -- # set +x 00:15:30.666 15:39:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:30.666 15:39:09 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:30.666 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.666 [2024-07-10 15:39:10.025618] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:15:38.771 Initializing NVMe Controllers 00:15:38.771 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:38.771 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:38.771 Initialization complete. Launching workers. 00:15:38.771 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3009 00:15:38.771 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3278, failed to submit 51 00:15:38.771 success 3123, unsuccess 155, failed 0 00:15:38.771 15:39:16 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:15:38.771 15:39:16 -- target/zcopy.sh@60 -- # nvmftestfini 00:15:38.771 15:39:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:38.771 15:39:16 -- nvmf/common.sh@116 -- # sync 00:15:38.771 15:39:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:38.771 15:39:16 -- nvmf/common.sh@119 -- # set +e 00:15:38.771 15:39:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:38.771 15:39:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:38.771 rmmod nvme_tcp 00:15:38.771 rmmod nvme_fabrics 00:15:38.771 rmmod nvme_keyring 00:15:38.771 15:39:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:38.771 15:39:16 -- nvmf/common.sh@123 -- # set -e 00:15:38.771 15:39:16 -- nvmf/common.sh@124 -- # return 0 00:15:38.771 15:39:16 -- nvmf/common.sh@477 -- # '[' -n 2109859 ']' 00:15:38.771 15:39:16 -- nvmf/common.sh@478 -- # killprocess 2109859 00:15:38.771 15:39:16 -- common/autotest_common.sh@926 -- # '[' -z 2109859 ']' 00:15:38.771 15:39:16 -- common/autotest_common.sh@930 -- # kill -0 2109859 00:15:38.771 15:39:16 -- common/autotest_common.sh@931 -- # uname 00:15:38.771 15:39:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:38.771 15:39:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2109859 00:15:38.771 15:39:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:38.771 15:39:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:38.771 15:39:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2109859' 00:15:38.772 killing process with pid 2109859 00:15:38.772 15:39:16 -- common/autotest_common.sh@945 -- # kill 2109859 00:15:38.772 15:39:16 -- common/autotest_common.sh@950 -- # wait 2109859 00:15:38.772 15:39:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:38.772 15:39:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:38.772 15:39:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:38.772 15:39:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s 
]] 00:15:38.772 15:39:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:38.772 15:39:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.772 15:39:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:38.772 15:39:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.149 15:39:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:40.149 00:15:40.149 real 0m29.390s 00:15:40.149 user 0m37.254s 00:15:40.149 sys 0m10.142s 00:15:40.149 15:39:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:40.149 15:39:19 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 ************************************ 00:15:40.149 END TEST nvmf_zcopy 00:15:40.149 ************************************ 00:15:40.149 15:39:19 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:40.149 15:39:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:40.149 15:39:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:40.149 15:39:19 -- common/autotest_common.sh@10 -- # set +x 00:15:40.149 ************************************ 00:15:40.149 START TEST nvmf_nmic 00:15:40.149 ************************************ 00:15:40.149 15:39:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:15:40.149 * Looking for test storage... 00:15:40.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.149 15:39:19 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.149 15:39:19 -- nvmf/common.sh@7 -- # uname -s 00:15:40.149 15:39:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.149 15:39:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.149 15:39:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.149 15:39:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.149 15:39:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.149 15:39:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.149 15:39:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.149 15:39:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.149 15:39:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.149 15:39:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.149 15:39:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.149 15:39:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.149 15:39:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.149 15:39:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.149 15:39:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.149 15:39:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.149 15:39:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.149 15:39:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.149 15:39:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.149 15:39:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.150 15:39:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.150 15:39:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.150 15:39:19 -- paths/export.sh@5 -- # export PATH 00:15:40.150 15:39:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.150 15:39:19 -- nvmf/common.sh@46 -- # : 0 00:15:40.150 15:39:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:40.150 15:39:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:40.150 15:39:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:40.150 15:39:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.150 15:39:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.150 15:39:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:40.150 15:39:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:40.150 15:39:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:40.150 15:39:19 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:40.150 15:39:19 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:40.150 15:39:19 -- target/nmic.sh@14 -- # nvmftestinit 00:15:40.150 15:39:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:40.150 15:39:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.150 15:39:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:40.150 15:39:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:40.150 15:39:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:40.150 15:39:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:15:40.150 15:39:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.150 15:39:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.150 15:39:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:40.150 15:39:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:40.150 15:39:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:40.150 15:39:19 -- common/autotest_common.sh@10 -- # set +x 00:15:42.051 15:39:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:42.051 15:39:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:42.051 15:39:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:42.051 15:39:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:42.051 15:39:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:42.051 15:39:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:42.051 15:39:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:42.051 15:39:21 -- nvmf/common.sh@294 -- # net_devs=() 00:15:42.051 15:39:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:42.051 15:39:21 -- nvmf/common.sh@295 -- # e810=() 00:15:42.051 15:39:21 -- nvmf/common.sh@295 -- # local -ga e810 00:15:42.051 15:39:21 -- nvmf/common.sh@296 -- # x722=() 00:15:42.051 15:39:21 -- nvmf/common.sh@296 -- # local -ga x722 00:15:42.051 15:39:21 -- nvmf/common.sh@297 -- # mlx=() 00:15:42.051 15:39:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:42.051 15:39:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.051 15:39:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:42.051 15:39:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:42.051 15:39:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:42.051 15:39:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.051 15:39:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:42.051 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:42.051 15:39:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:42.051 15:39:21 -- 
nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:42.051 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:42.051 15:39:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:42.051 15:39:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.051 15:39:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.051 15:39:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.051 15:39:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.051 15:39:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:42.051 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:42.051 15:39:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.051 15:39:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:42.051 15:39:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.051 15:39:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:42.051 15:39:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.051 15:39:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:42.051 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:42.051 15:39:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.051 15:39:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:42.051 15:39:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:42.051 15:39:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:42.051 15:39:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:42.051 15:39:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.051 15:39:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.051 15:39:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.051 15:39:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:42.051 15:39:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.051 15:39:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.051 15:39:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:42.051 15:39:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.051 15:39:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.051 15:39:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:42.051 15:39:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:42.051 15:39:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.051 15:39:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.337 15:39:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.337 15:39:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.337 15:39:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:42.337 15:39:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:15:42.337 15:39:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.337 15:39:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.337 15:39:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:42.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:15:42.337 00:15:42.337 --- 10.0.0.2 ping statistics --- 00:15:42.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.337 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:15:42.337 15:39:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:15:42.337 00:15:42.337 --- 10.0.0.1 ping statistics --- 00:15:42.338 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.338 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:15:42.338 15:39:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.338 15:39:21 -- nvmf/common.sh@410 -- # return 0 00:15:42.338 15:39:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:42.338 15:39:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.338 15:39:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:42.338 15:39:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:42.338 15:39:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.338 15:39:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:42.338 15:39:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:42.338 15:39:21 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:15:42.338 15:39:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:42.338 15:39:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:42.338 15:39:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.338 15:39:21 -- nvmf/common.sh@469 -- # nvmfpid=2115423 00:15:42.338 15:39:21 -- nvmf/common.sh@470 -- # waitforlisten 2115423 00:15:42.338 15:39:21 -- common/autotest_common.sh@819 -- # '[' -z 2115423 ']' 00:15:42.338 15:39:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.338 15:39:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:42.338 15:39:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:42.338 15:39:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.338 15:39:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:42.338 15:39:21 -- common/autotest_common.sh@10 -- # set +x 00:15:42.338 [2024-07-10 15:39:21.581259] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:42.338 [2024-07-10 15:39:21.581347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:42.338 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.338 [2024-07-10 15:39:21.649369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:42.619 [2024-07-10 15:39:21.774619] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:42.619 [2024-07-10 15:39:21.774780] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:42.619 [2024-07-10 15:39:21.774799] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:42.619 [2024-07-10 15:39:21.774813] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:42.619 [2024-07-10 15:39:21.774876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:42.619 [2024-07-10 15:39:21.774938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:42.619 [2024-07-10 15:39:21.774965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:42.619 [2024-07-10 15:39:21.774969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.184 15:39:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:43.184 15:39:22 -- common/autotest_common.sh@852 -- # return 0 00:15:43.184 15:39:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:43.184 15:39:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:43.184 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.184 15:39:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.184 15:39:22 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:43.184 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.184 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.184 [2024-07-10 15:39:22.547834] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.184 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.184 15:39:22 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:43.184 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.184 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.442 Malloc0 00:15:43.442 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.442 15:39:22 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:43.442 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.442 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.442 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.442 15:39:22 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:43.442 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.442 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.442 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.442 15:39:22 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:43.442 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.442 15:39:22 -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.442 [2024-07-10 15:39:22.601370] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:43.442 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.442 15:39:22 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:15:43.442 test case1: single bdev can't be used in multiple subsystems 00:15:43.442 15:39:22 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:15:43.442 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.442 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.442 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.442 15:39:22 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:43.442 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.442 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.442 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.442 15:39:22 -- target/nmic.sh@28 -- # nmic_status=0 00:15:43.442 15:39:22 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:15:43.442 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.442 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.442 [2024-07-10 15:39:22.625235] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:15:43.442 [2024-07-10 15:39:22.625263] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:15:43.442 [2024-07-10 15:39:22.625276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:43.442 request: 00:15:43.442 { 00:15:43.442 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:15:43.442 "namespace": { 00:15:43.442 "bdev_name": "Malloc0" 00:15:43.442 }, 00:15:43.442 "method": "nvmf_subsystem_add_ns", 00:15:43.442 "req_id": 1 00:15:43.442 } 00:15:43.442 Got JSON-RPC error response 00:15:43.442 response: 00:15:43.442 { 00:15:43.442 "code": -32602, 00:15:43.442 "message": "Invalid parameters" 00:15:43.442 } 00:15:43.442 15:39:22 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:15:43.442 15:39:22 -- target/nmic.sh@29 -- # nmic_status=1 00:15:43.443 15:39:22 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:15:43.443 15:39:22 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:15:43.443 Adding namespace failed - expected result. 
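The claim-conflict check above (test case1) can be reproduced by hand against a running nvmf_tgt. The lines below are a minimal sketch that uses SPDK's scripts/rpc.py client over the default /var/tmp/spdk.sock socket instead of the harness's rpc_cmd wrapper; the rpc.py path is an assumption, while the RPC method names and arguments are exactly the ones traced above. The final nvmf_subsystem_add_ns is expected to fail with the "Invalid parameters" JSON-RPC error, because Malloc0 is already claimed exclusive_write by cnode1.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # assumed client path; adjust to your SPDK checkout
$rpc nvmf_create_transport -t tcp -o -u 8192                           # TCP transport, as in nmic.sh@17
$rpc bdev_malloc_create 64 512 -b Malloc0                              # 64 MB malloc bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0          # first claim succeeds
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  || echo 'Adding namespace failed - expected result.'                 # second claim is rejected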
00:15:43.443 15:39:22 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:15:43.443 test case2: host connect to nvmf target in multiple paths 00:15:43.443 15:39:22 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:43.443 15:39:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:43.443 15:39:22 -- common/autotest_common.sh@10 -- # set +x 00:15:43.443 [2024-07-10 15:39:22.633352] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:43.443 15:39:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:43.443 15:39:22 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:44.008 15:39:23 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:15:44.572 15:39:23 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:15:44.572 15:39:23 -- common/autotest_common.sh@1177 -- # local i=0 00:15:44.572 15:39:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:44.572 15:39:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:44.572 15:39:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:47.097 15:39:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:47.097 15:39:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:47.097 15:39:25 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.097 15:39:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:47.097 15:39:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.097 15:39:25 -- common/autotest_common.sh@1187 -- # return 0 00:15:47.097 15:39:25 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:15:47.097 [global] 00:15:47.097 thread=1 00:15:47.097 invalidate=1 00:15:47.097 rw=write 00:15:47.097 time_based=1 00:15:47.097 runtime=1 00:15:47.097 ioengine=libaio 00:15:47.097 direct=1 00:15:47.097 bs=4096 00:15:47.097 iodepth=1 00:15:47.097 norandommap=0 00:15:47.097 numjobs=1 00:15:47.097 00:15:47.097 verify_dump=1 00:15:47.097 verify_backlog=512 00:15:47.097 verify_state_save=0 00:15:47.097 do_verify=1 00:15:47.097 verify=crc32c-intel 00:15:47.097 [job0] 00:15:47.097 filename=/dev/nvme0n1 00:15:47.097 Could not set queue depth (nvme0n1) 00:15:47.097 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:15:47.097 fio-3.35 00:15:47.097 Starting 1 thread 00:15:48.028 00:15:48.028 job0: (groupid=0, jobs=1): err= 0: pid=2116088: Wed Jul 10 15:39:27 2024 00:15:48.028 read: IOPS=20, BW=82.3KiB/s (84.2kB/s)(84.0KiB/1021msec) 00:15:48.028 slat (nsec): min=13446, max=37766, avg=25375.90, stdev=9903.48 00:15:48.028 clat (usec): min=40879, max=41983, avg=41402.85, stdev=496.01 00:15:48.028 lat (usec): min=40894, max=42012, avg=41428.23, stdev=498.53 00:15:48.028 clat percentiles (usec): 00:15:48.028 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:15:48.028 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:15:48.028 | 
70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:15:48.028 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:15:48.028 | 99.99th=[42206] 00:15:48.028 write: IOPS=501, BW=2006KiB/s (2054kB/s)(2048KiB/1021msec); 0 zone resets 00:15:48.028 slat (usec): min=9, max=28847, avg=73.92, stdev=1274.13 00:15:48.028 clat (usec): min=180, max=318, avg=215.14, stdev=20.46 00:15:48.028 lat (usec): min=192, max=29074, avg=289.06, stdev=1274.88 00:15:48.028 clat percentiles (usec): 00:15:48.028 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:15:48.028 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 215], 60.00th=[ 219], 00:15:48.028 | 70.00th=[ 225], 80.00th=[ 231], 90.00th=[ 239], 95.00th=[ 247], 00:15:48.028 | 99.00th=[ 289], 99.50th=[ 310], 99.90th=[ 318], 99.95th=[ 318], 00:15:48.028 | 99.99th=[ 318] 00:15:48.028 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:15:48.028 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:15:48.028 lat (usec) : 250=91.93%, 500=4.13% 00:15:48.028 lat (msec) : 50=3.94% 00:15:48.028 cpu : usr=0.59%, sys=1.08%, ctx=537, majf=0, minf=2 00:15:48.028 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:48.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.028 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.028 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:48.028 00:15:48.028 Run status group 0 (all jobs): 00:15:48.028 READ: bw=82.3KiB/s (84.2kB/s), 82.3KiB/s-82.3KiB/s (84.2kB/s-84.2kB/s), io=84.0KiB (86.0kB), run=1021-1021msec 00:15:48.028 WRITE: bw=2006KiB/s (2054kB/s), 2006KiB/s-2006KiB/s (2054kB/s-2054kB/s), io=2048KiB (2097kB), run=1021-1021msec 00:15:48.028 00:15:48.028 Disk stats (read/write): 00:15:48.028 nvme0n1: ios=44/512, merge=0/0, ticks=1737/97, in_queue=1834, util=98.60% 00:15:48.028 15:39:27 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:48.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:48.286 15:39:27 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:48.286 15:39:27 -- common/autotest_common.sh@1198 -- # local i=0 00:15:48.286 15:39:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:48.286 15:39:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.286 15:39:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:48.286 15:39:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:48.286 15:39:27 -- common/autotest_common.sh@1210 -- # return 0 00:15:48.286 15:39:27 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:15:48.286 15:39:27 -- target/nmic.sh@53 -- # nvmftestfini 00:15:48.286 15:39:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:48.286 15:39:27 -- nvmf/common.sh@116 -- # sync 00:15:48.286 15:39:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:48.286 15:39:27 -- nvmf/common.sh@119 -- # set +e 00:15:48.286 15:39:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:48.286 15:39:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:48.286 rmmod nvme_tcp 00:15:48.286 rmmod nvme_fabrics 00:15:48.286 rmmod nvme_keyring 00:15:48.286 15:39:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:48.286 15:39:27 -- nvmf/common.sh@123 -- # set -e 00:15:48.286 15:39:27 -- 
nvmf/common.sh@124 -- # return 0 00:15:48.286 15:39:27 -- nvmf/common.sh@477 -- # '[' -n 2115423 ']' 00:15:48.286 15:39:27 -- nvmf/common.sh@478 -- # killprocess 2115423 00:15:48.286 15:39:27 -- common/autotest_common.sh@926 -- # '[' -z 2115423 ']' 00:15:48.286 15:39:27 -- common/autotest_common.sh@930 -- # kill -0 2115423 00:15:48.286 15:39:27 -- common/autotest_common.sh@931 -- # uname 00:15:48.286 15:39:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:48.286 15:39:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2115423 00:15:48.286 15:39:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:48.286 15:39:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:48.286 15:39:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2115423' 00:15:48.286 killing process with pid 2115423 00:15:48.286 15:39:27 -- common/autotest_common.sh@945 -- # kill 2115423 00:15:48.286 15:39:27 -- common/autotest_common.sh@950 -- # wait 2115423 00:15:48.544 15:39:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:48.544 15:39:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:48.544 15:39:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:48.544 15:39:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:48.544 15:39:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:48.544 15:39:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:48.544 15:39:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:48.544 15:39:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.076 15:39:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:51.076 00:15:51.076 real 0m10.587s 00:15:51.076 user 0m25.061s 00:15:51.076 sys 0m2.396s 00:15:51.076 15:39:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.076 15:39:29 -- common/autotest_common.sh@10 -- # set +x 00:15:51.076 ************************************ 00:15:51.076 END TEST nvmf_nmic 00:15:51.076 ************************************ 00:15:51.076 15:39:29 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:51.076 15:39:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:51.076 15:39:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:51.076 15:39:29 -- common/autotest_common.sh@10 -- # set +x 00:15:51.076 ************************************ 00:15:51.076 START TEST nvmf_fio_target 00:15:51.076 ************************************ 00:15:51.076 15:39:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:15:51.076 * Looking for test storage... 
00:15:51.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:51.076 15:39:29 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.076 15:39:29 -- nvmf/common.sh@7 -- # uname -s 00:15:51.076 15:39:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.076 15:39:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.076 15:39:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.076 15:39:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.076 15:39:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.076 15:39:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.076 15:39:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.076 15:39:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.076 15:39:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.076 15:39:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.076 15:39:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.076 15:39:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:51.076 15:39:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.076 15:39:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.076 15:39:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.076 15:39:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:51.076 15:39:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.076 15:39:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.076 15:39:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.076 15:39:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.076 15:39:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.076 15:39:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.076 15:39:29 -- paths/export.sh@5 -- # export PATH 00:15:51.076 15:39:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.076 15:39:29 -- nvmf/common.sh@46 -- # : 0 00:15:51.076 15:39:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.076 15:39:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.076 15:39:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.076 15:39:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.076 15:39:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.076 15:39:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:51.076 15:39:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.076 15:39:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.076 15:39:29 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.076 15:39:29 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:51.076 15:39:29 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:51.076 15:39:29 -- target/fio.sh@16 -- # nvmftestinit 00:15:51.076 15:39:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:51.076 15:39:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.076 15:39:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.076 15:39:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.076 15:39:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.076 15:39:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.076 15:39:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.076 15:39:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.076 15:39:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:51.076 15:39:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:51.076 15:39:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:51.076 15:39:29 -- common/autotest_common.sh@10 -- # set +x 00:15:52.976 15:39:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:52.976 15:39:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:52.976 15:39:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:52.976 15:39:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:52.976 15:39:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:52.976 15:39:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:52.976 15:39:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:52.976 15:39:31 -- nvmf/common.sh@294 -- # net_devs=() 
00:15:52.976 15:39:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:52.976 15:39:31 -- nvmf/common.sh@295 -- # e810=() 00:15:52.976 15:39:31 -- nvmf/common.sh@295 -- # local -ga e810 00:15:52.976 15:39:31 -- nvmf/common.sh@296 -- # x722=() 00:15:52.976 15:39:31 -- nvmf/common.sh@296 -- # local -ga x722 00:15:52.976 15:39:31 -- nvmf/common.sh@297 -- # mlx=() 00:15:52.976 15:39:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:52.976 15:39:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:52.976 15:39:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:52.976 15:39:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:52.976 15:39:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:52.976 15:39:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:52.976 15:39:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:52.976 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:52.976 15:39:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:52.976 15:39:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:52.976 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:52.976 15:39:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:52.976 15:39:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:52.976 15:39:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.976 15:39:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:52.976 15:39:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:52.976 15:39:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:52.976 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:52.976 15:39:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.976 15:39:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:52.976 15:39:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:52.976 15:39:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:52.976 15:39:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:52.976 15:39:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:52.976 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:52.976 15:39:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:52.976 15:39:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:52.976 15:39:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:52.976 15:39:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:52.976 15:39:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:52.976 15:39:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:52.976 15:39:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:52.976 15:39:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:52.976 15:39:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:52.976 15:39:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:52.976 15:39:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:52.976 15:39:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:52.976 15:39:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:52.976 15:39:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:52.976 15:39:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:52.976 15:39:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:52.976 15:39:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:52.976 15:39:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:52.976 15:39:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:52.976 15:39:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:52.976 15:39:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:52.976 15:39:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:52.976 15:39:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:52.976 15:39:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:52.976 15:39:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:52.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:52.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:15:52.976 00:15:52.976 --- 10.0.0.2 ping statistics --- 00:15:52.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.976 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:15:52.976 15:39:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:52.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:52.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:15:52.976 00:15:52.976 --- 10.0.0.1 ping statistics --- 00:15:52.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:52.976 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:15:52.977 15:39:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:52.977 15:39:32 -- nvmf/common.sh@410 -- # return 0 00:15:52.977 15:39:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:52.977 15:39:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:52.977 15:39:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:52.977 15:39:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:52.977 15:39:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:52.977 15:39:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:52.977 15:39:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:52.977 15:39:32 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:15:52.977 15:39:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:52.977 15:39:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:52.977 15:39:32 -- common/autotest_common.sh@10 -- # set +x 00:15:52.977 15:39:32 -- nvmf/common.sh@469 -- # nvmfpid=2118298 00:15:52.977 15:39:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:52.977 15:39:32 -- nvmf/common.sh@470 -- # waitforlisten 2118298 00:15:52.977 15:39:32 -- common/autotest_common.sh@819 -- # '[' -z 2118298 ']' 00:15:52.977 15:39:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.977 15:39:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:52.977 15:39:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.977 15:39:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:52.977 15:39:32 -- common/autotest_common.sh@10 -- # set +x 00:15:52.977 [2024-07-10 15:39:32.164380] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:52.977 [2024-07-10 15:39:32.164479] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:52.977 EAL: No free 2048 kB hugepages reported on node 1 00:15:52.977 [2024-07-10 15:39:32.233616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:53.234 [2024-07-10 15:39:32.354043] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:53.234 [2024-07-10 15:39:32.354213] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:53.234 [2024-07-10 15:39:32.354238] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:53.234 [2024-07-10 15:39:32.354252] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:53.234 [2024-07-10 15:39:32.354340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.234 [2024-07-10 15:39:32.354367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.234 [2024-07-10 15:39:32.354420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:53.234 [2024-07-10 15:39:32.354423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.808 15:39:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:53.808 15:39:33 -- common/autotest_common.sh@852 -- # return 0 00:15:53.808 15:39:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:53.808 15:39:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:53.808 15:39:33 -- common/autotest_common.sh@10 -- # set +x 00:15:53.808 15:39:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:53.808 15:39:33 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:54.064 [2024-07-10 15:39:33.313441] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:54.064 15:39:33 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.321 15:39:33 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:15:54.321 15:39:33 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.579 15:39:33 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:15:54.579 15:39:33 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:54.837 15:39:34 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:15:54.837 15:39:34 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.095 15:39:34 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:15:55.095 15:39:34 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:15:55.353 15:39:34 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.611 15:39:34 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:15:55.611 15:39:34 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:55.869 15:39:35 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:15:55.869 15:39:35 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:56.128 15:39:35 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:15:56.128 15:39:35 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:15:56.386 15:39:35 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:56.644 15:39:35 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:56.644 15:39:35 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.901 15:39:36 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:15:56.901 15:39:36 
-- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:57.159 15:39:36 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.416 [2024-07-10 15:39:36.538859] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.416 15:39:36 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:15:57.416 15:39:36 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:15:57.673 15:39:37 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:58.606 15:39:37 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:15:58.606 15:39:37 -- common/autotest_common.sh@1177 -- # local i=0 00:15:58.606 15:39:37 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.606 15:39:37 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:15:58.606 15:39:37 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:15:58.606 15:39:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:00.503 15:39:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:00.503 15:39:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:00.503 15:39:39 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:00.503 15:39:39 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:00.503 15:39:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.503 15:39:39 -- common/autotest_common.sh@1187 -- # return 0 00:16:00.503 15:39:39 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:00.503 [global] 00:16:00.503 thread=1 00:16:00.503 invalidate=1 00:16:00.503 rw=write 00:16:00.503 time_based=1 00:16:00.503 runtime=1 00:16:00.503 ioengine=libaio 00:16:00.503 direct=1 00:16:00.503 bs=4096 00:16:00.503 iodepth=1 00:16:00.503 norandommap=0 00:16:00.503 numjobs=1 00:16:00.503 00:16:00.503 verify_dump=1 00:16:00.503 verify_backlog=512 00:16:00.503 verify_state_save=0 00:16:00.503 do_verify=1 00:16:00.503 verify=crc32c-intel 00:16:00.503 [job0] 00:16:00.503 filename=/dev/nvme0n1 00:16:00.503 [job1] 00:16:00.503 filename=/dev/nvme0n2 00:16:00.503 [job2] 00:16:00.503 filename=/dev/nvme0n3 00:16:00.503 [job3] 00:16:00.503 filename=/dev/nvme0n4 00:16:00.503 Could not set queue depth (nvme0n1) 00:16:00.503 Could not set queue depth (nvme0n2) 00:16:00.503 Could not set queue depth (nvme0n3) 00:16:00.503 Could not set queue depth (nvme0n4) 00:16:00.761 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.761 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.761 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.761 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:00.761 fio-3.35 
00:16:00.761 Starting 4 threads 00:16:02.135 00:16:02.135 job0: (groupid=0, jobs=1): err= 0: pid=2119389: Wed Jul 10 15:39:41 2024 00:16:02.135 read: IOPS=19, BW=78.4KiB/s (80.3kB/s)(80.0KiB/1020msec) 00:16:02.135 slat (nsec): min=13945, max=35133, avg=26475.95, stdev=9533.25 00:16:02.135 clat (usec): min=40975, max=42291, avg=41886.02, stdev=322.45 00:16:02.135 lat (usec): min=40989, max=42307, avg=41912.49, stdev=324.72 00:16:02.135 clat percentiles (usec): 00:16:02.135 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:16:02.135 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:16:02.135 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:16:02.135 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:02.135 | 99.99th=[42206] 00:16:02.135 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:16:02.135 slat (usec): min=6, max=14065, avg=48.67, stdev=620.82 00:16:02.135 clat (usec): min=178, max=3401, avg=296.66, stdev=164.64 00:16:02.135 lat (usec): min=186, max=14584, avg=345.33, stdev=652.22 00:16:02.135 clat percentiles (usec): 00:16:02.135 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 206], 00:16:02.135 | 30.00th=[ 227], 40.00th=[ 245], 50.00th=[ 262], 60.00th=[ 281], 00:16:02.135 | 70.00th=[ 334], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 474], 00:16:02.135 | 99.00th=[ 519], 99.50th=[ 570], 99.90th=[ 3392], 99.95th=[ 3392], 00:16:02.135 | 99.99th=[ 3392] 00:16:02.135 bw ( KiB/s): min= 4096, max= 4096, per=28.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:02.135 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:02.135 lat (usec) : 250=42.11%, 500=51.88%, 750=2.07% 00:16:02.135 lat (msec) : 4=0.19%, 50=3.76% 00:16:02.135 cpu : usr=0.10%, sys=1.47%, ctx=536, majf=0, minf=1 00:16:02.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.135 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.135 job1: (groupid=0, jobs=1): err= 0: pid=2119407: Wed Jul 10 15:39:41 2024 00:16:02.135 read: IOPS=20, BW=81.6KiB/s (83.5kB/s)(84.0KiB/1030msec) 00:16:02.135 slat (nsec): min=12929, max=47238, avg=27009.48, stdev=9914.95 00:16:02.135 clat (usec): min=40882, max=42046, avg=41034.54, stdev=260.84 00:16:02.135 lat (usec): min=40918, max=42063, avg=41061.55, stdev=258.08 00:16:02.135 clat percentiles (usec): 00:16:02.135 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:02.135 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:16:02.135 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:16:02.135 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:16:02.135 | 99.99th=[42206] 00:16:02.135 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:16:02.135 slat (nsec): min=6720, max=77286, avg=21328.18, stdev=11011.13 00:16:02.135 clat (usec): min=189, max=593, avg=296.44, stdev=71.30 00:16:02.135 lat (usec): min=197, max=605, avg=317.77, stdev=70.73 00:16:02.135 clat percentiles (usec): 00:16:02.135 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 223], 20.00th=[ 241], 00:16:02.135 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 00:16:02.135 | 70.00th=[ 314], 
80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 429], 00:16:02.135 | 99.00th=[ 529], 99.50th=[ 562], 99.90th=[ 594], 99.95th=[ 594], 00:16:02.135 | 99.99th=[ 594] 00:16:02.135 bw ( KiB/s): min= 4096, max= 4096, per=28.00%, avg=4096.00, stdev= 0.00, samples=1 00:16:02.135 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:02.135 lat (usec) : 250=26.45%, 500=68.11%, 750=1.50% 00:16:02.135 lat (msec) : 50=3.94% 00:16:02.135 cpu : usr=0.68%, sys=0.87%, ctx=535, majf=0, minf=2 00:16:02.135 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.135 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.135 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.135 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.135 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.135 job2: (groupid=0, jobs=1): err= 0: pid=2119408: Wed Jul 10 15:39:41 2024 00:16:02.135 read: IOPS=1370, BW=5483KiB/s (5614kB/s)(5488KiB/1001msec) 00:16:02.135 slat (nsec): min=5534, max=53521, avg=14063.05, stdev=6342.20 00:16:02.135 clat (usec): min=281, max=591, avg=363.48, stdev=65.86 00:16:02.136 lat (usec): min=288, max=612, avg=377.55, stdev=69.37 00:16:02.136 clat percentiles (usec): 00:16:02.136 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 310], 20.00th=[ 322], 00:16:02.136 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 351], 00:16:02.136 | 70.00th=[ 359], 80.00th=[ 375], 90.00th=[ 486], 95.00th=[ 510], 00:16:02.136 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 594], 99.95th=[ 594], 00:16:02.136 | 99.99th=[ 594] 00:16:02.136 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:16:02.136 slat (nsec): min=8298, max=69803, avg=23636.33, stdev=8800.39 00:16:02.136 clat (usec): min=195, max=626, avg=278.95, stdev=81.03 00:16:02.136 lat (usec): min=206, max=667, avg=302.59, stdev=84.52 00:16:02.136 clat percentiles (usec): 00:16:02.136 | 1.00th=[ 202], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:16:02.136 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 243], 60.00th=[ 255], 00:16:02.136 | 70.00th=[ 269], 80.00th=[ 318], 90.00th=[ 424], 95.00th=[ 465], 00:16:02.136 | 99.00th=[ 537], 99.50th=[ 570], 99.90th=[ 594], 99.95th=[ 627], 00:16:02.136 | 99.99th=[ 627] 00:16:02.136 bw ( KiB/s): min= 7872, max= 7872, per=53.81%, avg=7872.00, stdev= 0.00, samples=1 00:16:02.136 iops : min= 1968, max= 1968, avg=1968.00, stdev= 0.00, samples=1 00:16:02.136 lat (usec) : 250=30.26%, 500=65.13%, 750=4.61% 00:16:02.136 cpu : usr=4.30%, sys=7.00%, ctx=2909, majf=0, minf=1 00:16:02.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.136 issued rwts: total=1372,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.136 job3: (groupid=0, jobs=1): err= 0: pid=2119409: Wed Jul 10 15:39:41 2024 00:16:02.136 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:16:02.136 slat (nsec): min=5302, max=64726, avg=21454.91, stdev=9283.75 00:16:02.136 clat (usec): min=314, max=1417, avg=530.95, stdev=95.60 00:16:02.136 lat (usec): min=323, max=1437, avg=552.40, stdev=97.15 00:16:02.136 clat percentiles (usec): 00:16:02.136 | 1.00th=[ 334], 5.00th=[ 355], 10.00th=[ 396], 20.00th=[ 457], 00:16:02.136 | 30.00th=[ 486], 
40.00th=[ 506], 50.00th=[ 537], 60.00th=[ 553], 00:16:02.136 | 70.00th=[ 586], 80.00th=[ 611], 90.00th=[ 644], 95.00th=[ 668], 00:16:02.136 | 99.00th=[ 725], 99.50th=[ 734], 99.90th=[ 799], 99.95th=[ 1418], 00:16:02.136 | 99.99th=[ 1418] 00:16:02.136 write: IOPS=1205, BW=4823KiB/s (4939kB/s)(4828KiB/1001msec); 0 zone resets 00:16:02.136 slat (nsec): min=6394, max=75597, avg=24141.55, stdev=11067.51 00:16:02.136 clat (usec): min=193, max=1169, avg=323.99, stdev=76.67 00:16:02.136 lat (usec): min=207, max=1185, avg=348.13, stdev=77.79 00:16:02.136 clat percentiles (usec): 00:16:02.136 | 1.00th=[ 215], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 253], 00:16:02.136 | 30.00th=[ 277], 40.00th=[ 302], 50.00th=[ 318], 60.00th=[ 334], 00:16:02.136 | 70.00th=[ 359], 80.00th=[ 383], 90.00th=[ 416], 95.00th=[ 445], 00:16:02.136 | 99.00th=[ 523], 99.50th=[ 537], 99.90th=[ 930], 99.95th=[ 1172], 00:16:02.136 | 99.99th=[ 1172] 00:16:02.136 bw ( KiB/s): min= 4368, max= 4368, per=29.86%, avg=4368.00, stdev= 0.00, samples=1 00:16:02.136 iops : min= 1092, max= 1092, avg=1092.00, stdev= 0.00, samples=1 00:16:02.136 lat (usec) : 250=9.68%, 500=60.29%, 750=29.72%, 1000=0.22% 00:16:02.136 lat (msec) : 2=0.09% 00:16:02.136 cpu : usr=4.60%, sys=5.10%, ctx=2231, majf=0, minf=1 00:16:02.136 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:02.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:02.136 issued rwts: total=1024,1207,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:02.136 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:02.136 00:16:02.136 Run status group 0 (all jobs): 00:16:02.136 READ: bw=9464KiB/s (9691kB/s), 78.4KiB/s-5483KiB/s (80.3kB/s-5614kB/s), io=9748KiB (9982kB), run=1001-1030msec 00:16:02.136 WRITE: bw=14.3MiB/s (15.0MB/s), 1988KiB/s-6138KiB/s (2036kB/s-6285kB/s), io=14.7MiB (15.4MB), run=1001-1030msec 00:16:02.136 00:16:02.136 Disk stats (read/write): 00:16:02.136 nvme0n1: ios=57/512, merge=0/0, ticks=869/145, in_queue=1014, util=86.67% 00:16:02.136 nvme0n2: ios=58/512, merge=0/0, ticks=739/147, in_queue=886, util=91.14% 00:16:02.136 nvme0n3: ios=1086/1433, merge=0/0, ticks=1271/382, in_queue=1653, util=95.60% 00:16:02.136 nvme0n4: ios=896/1024, merge=0/0, ticks=544/305, in_queue=849, util=96.41% 00:16:02.136 15:39:41 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:02.136 [global] 00:16:02.136 thread=1 00:16:02.136 invalidate=1 00:16:02.136 rw=randwrite 00:16:02.136 time_based=1 00:16:02.136 runtime=1 00:16:02.136 ioengine=libaio 00:16:02.136 direct=1 00:16:02.136 bs=4096 00:16:02.136 iodepth=1 00:16:02.136 norandommap=0 00:16:02.136 numjobs=1 00:16:02.136 00:16:02.136 verify_dump=1 00:16:02.136 verify_backlog=512 00:16:02.136 verify_state_save=0 00:16:02.136 do_verify=1 00:16:02.136 verify=crc32c-intel 00:16:02.136 [job0] 00:16:02.136 filename=/dev/nvme0n1 00:16:02.136 [job1] 00:16:02.136 filename=/dev/nvme0n2 00:16:02.136 [job2] 00:16:02.136 filename=/dev/nvme0n3 00:16:02.136 [job3] 00:16:02.136 filename=/dev/nvme0n4 00:16:02.136 Could not set queue depth (nvme0n1) 00:16:02.136 Could not set queue depth (nvme0n2) 00:16:02.136 Could not set queue depth (nvme0n3) 00:16:02.136 Could not set queue depth (nvme0n4) 00:16:02.136 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.136 job1: 
(g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.136 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.136 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:02.136 fio-3.35 00:16:02.136 Starting 4 threads 00:16:03.510 00:16:03.510 job0: (groupid=0, jobs=1): err= 0: pid=2119643: Wed Jul 10 15:39:42 2024 00:16:03.510 read: IOPS=456, BW=1825KiB/s (1869kB/s)(1860KiB/1019msec) 00:16:03.510 slat (nsec): min=6200, max=58416, avg=18535.78, stdev=6056.59 00:16:03.510 clat (usec): min=295, max=41978, avg=1892.22, stdev=7677.60 00:16:03.510 lat (usec): min=312, max=42012, avg=1910.75, stdev=7678.22 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 318], 20.00th=[ 326], 00:16:03.510 | 30.00th=[ 330], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 367], 00:16:03.510 | 70.00th=[ 388], 80.00th=[ 469], 90.00th=[ 523], 95.00th=[ 619], 00:16:03.510 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:16:03.510 | 99.99th=[42206] 00:16:03.510 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:16:03.510 slat (nsec): min=7366, max=37022, avg=10358.46, stdev=4606.53 00:16:03.510 clat (usec): min=182, max=399, avg=234.26, stdev=36.03 00:16:03.510 lat (usec): min=191, max=407, avg=244.61, stdev=36.67 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[ 190], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 206], 00:16:03.510 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 227], 60.00th=[ 233], 00:16:03.510 | 70.00th=[ 245], 80.00th=[ 255], 90.00th=[ 277], 95.00th=[ 310], 00:16:03.510 | 99.00th=[ 355], 99.50th=[ 392], 99.90th=[ 400], 99.95th=[ 400], 00:16:03.510 | 99.99th=[ 400] 00:16:03.510 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.510 lat (usec) : 250=39.82%, 500=54.45%, 750=3.79% 00:16:03.510 lat (msec) : 2=0.10%, 10=0.10%, 50=1.74% 00:16:03.510 cpu : usr=0.49%, sys=2.46%, ctx=978, majf=0, minf=2 00:16:03.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.510 issued rwts: total=465,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.510 job1: (groupid=0, jobs=1): err= 0: pid=2119644: Wed Jul 10 15:39:42 2024 00:16:03.510 read: IOPS=17, BW=71.4KiB/s (73.1kB/s)(72.0KiB/1009msec) 00:16:03.510 slat (nsec): min=12369, max=32863, avg=26409.44, stdev=7955.85 00:16:03.510 clat (usec): min=40841, max=42729, avg=41566.54, stdev=573.66 00:16:03.510 lat (usec): min=40859, max=42762, avg=41592.95, stdev=575.65 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:16:03.510 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:16:03.510 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:16:03.510 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:16:03.510 | 99.99th=[42730] 00:16:03.510 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:16:03.510 slat (nsec): min=6195, max=44508, avg=13756.80, stdev=5355.24 
00:16:03.510 clat (usec): min=183, max=1341, avg=489.44, stdev=240.31 00:16:03.510 lat (usec): min=190, max=1355, avg=503.19, stdev=241.50 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[ 200], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 260], 00:16:03.510 | 30.00th=[ 330], 40.00th=[ 388], 50.00th=[ 416], 60.00th=[ 453], 00:16:03.510 | 70.00th=[ 578], 80.00th=[ 758], 90.00th=[ 857], 95.00th=[ 914], 00:16:03.510 | 99.00th=[ 1106], 99.50th=[ 1156], 99.90th=[ 1336], 99.95th=[ 1336], 00:16:03.510 | 99.99th=[ 1336] 00:16:03.510 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.510 lat (usec) : 250=17.92%, 500=45.28%, 750=13.02%, 1000=17.74% 00:16:03.510 lat (msec) : 2=2.64%, 50=3.40% 00:16:03.510 cpu : usr=0.30%, sys=0.79%, ctx=530, majf=0, minf=1 00:16:03.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.510 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.510 job2: (groupid=0, jobs=1): err= 0: pid=2119645: Wed Jul 10 15:39:42 2024 00:16:03.510 read: IOPS=502, BW=2010KiB/s (2058kB/s)(2052KiB/1021msec) 00:16:03.510 slat (nsec): min=6028, max=63944, avg=19412.74, stdev=8741.48 00:16:03.510 clat (usec): min=281, max=42010, avg=1120.66, stdev=5402.90 00:16:03.510 lat (usec): min=290, max=42029, avg=1140.07, stdev=5402.79 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 297], 20.00th=[ 306], 00:16:03.510 | 30.00th=[ 314], 40.00th=[ 326], 50.00th=[ 359], 60.00th=[ 371], 00:16:03.510 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 482], 95.00th=[ 510], 00:16:03.510 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:03.510 | 99.99th=[42206] 00:16:03.510 write: IOPS=1002, BW=4012KiB/s (4108kB/s)(4096KiB/1021msec); 0 zone resets 00:16:03.510 slat (nsec): min=6186, max=62230, avg=22007.86, stdev=10790.13 00:16:03.510 clat (usec): min=231, max=1876, avg=392.99, stdev=187.54 00:16:03.510 lat (usec): min=254, max=1893, avg=414.99, stdev=185.11 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 273], 00:16:03.510 | 30.00th=[ 285], 40.00th=[ 302], 50.00th=[ 326], 60.00th=[ 343], 00:16:03.510 | 70.00th=[ 379], 80.00th=[ 412], 90.00th=[ 766], 95.00th=[ 840], 00:16:03.510 | 99.00th=[ 914], 99.50th=[ 955], 99.90th=[ 996], 99.95th=[ 1876], 00:16:03.510 | 99.99th=[ 1876] 00:16:03.510 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=2 00:16:03.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:16:03.510 lat (usec) : 250=0.39%, 500=87.25%, 750=4.10%, 1000=7.55% 00:16:03.510 lat (msec) : 2=0.07%, 20=0.07%, 50=0.59% 00:16:03.510 cpu : usr=1.67%, sys=3.43%, ctx=1539, majf=0, minf=1 00:16:03.510 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.510 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.510 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.510 issued rwts: total=513,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.510 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.510 job3: (groupid=0, jobs=1): err= 0: 
pid=2119646: Wed Jul 10 15:39:42 2024 00:16:03.510 read: IOPS=167, BW=669KiB/s (685kB/s)(676KiB/1010msec) 00:16:03.510 slat (nsec): min=8457, max=52528, avg=19519.56, stdev=6253.47 00:16:03.510 clat (usec): min=336, max=41524, avg=4942.88, stdev=12669.80 00:16:03.510 lat (usec): min=345, max=41544, avg=4962.40, stdev=12671.47 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[ 338], 5.00th=[ 404], 10.00th=[ 433], 20.00th=[ 445], 00:16:03.510 | 30.00th=[ 453], 40.00th=[ 457], 50.00th=[ 465], 60.00th=[ 474], 00:16:03.510 | 70.00th=[ 482], 80.00th=[ 494], 90.00th=[40633], 95.00th=[41157], 00:16:03.510 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:16:03.510 | 99.99th=[41681] 00:16:03.510 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:16:03.510 slat (nsec): min=7275, max=31033, avg=12382.57, stdev=5241.64 00:16:03.510 clat (usec): min=217, max=500, avg=314.96, stdev=45.41 00:16:03.510 lat (usec): min=226, max=510, avg=327.34, stdev=45.92 00:16:03.510 clat percentiles (usec): 00:16:03.510 | 1.00th=[ 265], 5.00th=[ 269], 10.00th=[ 273], 20.00th=[ 281], 00:16:03.510 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 310], 00:16:03.510 | 70.00th=[ 322], 80.00th=[ 343], 90.00th=[ 396], 95.00th=[ 408], 00:16:03.510 | 99.00th=[ 457], 99.50th=[ 469], 99.90th=[ 502], 99.95th=[ 502], 00:16:03.510 | 99.99th=[ 502] 00:16:03.510 bw ( KiB/s): min= 4096, max= 4096, per=40.84%, avg=4096.00, stdev= 0.00, samples=1 00:16:03.510 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:16:03.511 lat (usec) : 250=0.15%, 500=95.30%, 750=1.76% 00:16:03.511 lat (msec) : 50=2.79% 00:16:03.511 cpu : usr=0.89%, sys=0.99%, ctx=681, majf=0, minf=1 00:16:03.511 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:03.511 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.511 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:03.511 issued rwts: total=169,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:03.511 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:03.511 00:16:03.511 Run status group 0 (all jobs): 00:16:03.511 READ: bw=4564KiB/s (4674kB/s), 71.4KiB/s-2010KiB/s (73.1kB/s-2058kB/s), io=4660KiB (4772kB), run=1009-1021msec 00:16:03.511 WRITE: bw=9.79MiB/s (10.3MB/s), 2010KiB/s-4012KiB/s (2058kB/s-4108kB/s), io=10.0MiB (10.5MB), run=1009-1021msec 00:16:03.511 00:16:03.511 Disk stats (read/write): 00:16:03.511 nvme0n1: ios=484/512, merge=0/0, ticks=1658/112, in_queue=1770, util=99.30% 00:16:03.511 nvme0n2: ios=63/512, merge=0/0, ticks=612/244, in_queue=856, util=88.32% 00:16:03.511 nvme0n3: ios=541/649, merge=0/0, ticks=1446/283, in_queue=1729, util=97.50% 00:16:03.511 nvme0n4: ios=200/512, merge=0/0, ticks=766/151, in_queue=917, util=95.80% 00:16:03.511 15:39:42 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:03.511 [global] 00:16:03.511 thread=1 00:16:03.511 invalidate=1 00:16:03.511 rw=write 00:16:03.511 time_based=1 00:16:03.511 runtime=1 00:16:03.511 ioengine=libaio 00:16:03.511 direct=1 00:16:03.511 bs=4096 00:16:03.511 iodepth=128 00:16:03.511 norandommap=0 00:16:03.511 numjobs=1 00:16:03.511 00:16:03.511 verify_dump=1 00:16:03.511 verify_backlog=512 00:16:03.511 verify_state_save=0 00:16:03.511 do_verify=1 00:16:03.511 verify=crc32c-intel 00:16:03.511 [job0] 00:16:03.511 filename=/dev/nvme0n1 00:16:03.511 [job1] 00:16:03.511 
filename=/dev/nvme0n2 00:16:03.511 [job2] 00:16:03.511 filename=/dev/nvme0n3 00:16:03.511 [job3] 00:16:03.511 filename=/dev/nvme0n4 00:16:03.511 Could not set queue depth (nvme0n1) 00:16:03.511 Could not set queue depth (nvme0n2) 00:16:03.511 Could not set queue depth (nvme0n3) 00:16:03.511 Could not set queue depth (nvme0n4) 00:16:03.511 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.511 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.511 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.511 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:03.511 fio-3.35 00:16:03.511 Starting 4 threads 00:16:04.902 00:16:04.902 job0: (groupid=0, jobs=1): err= 0: pid=2119881: Wed Jul 10 15:39:43 2024 00:16:04.902 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:16:04.902 slat (usec): min=2, max=10767, avg=113.40, stdev=638.82 00:16:04.902 clat (usec): min=7501, max=40329, avg=14621.35, stdev=4837.05 00:16:04.902 lat (usec): min=8170, max=40349, avg=14734.76, stdev=4860.56 00:16:04.902 clat percentiles (usec): 00:16:04.902 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10814], 00:16:04.902 | 30.00th=[11863], 40.00th=[12256], 50.00th=[12911], 60.00th=[14222], 00:16:04.902 | 70.00th=[15926], 80.00th=[17433], 90.00th=[20317], 95.00th=[23987], 00:16:04.902 | 99.00th=[32375], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:16:04.902 | 99.99th=[40109] 00:16:04.902 write: IOPS=4288, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1004msec); 0 zone resets 00:16:04.902 slat (usec): min=3, max=8539, avg=116.47, stdev=526.20 00:16:04.902 clat (usec): min=3272, max=44678, avg=15487.00, stdev=5377.05 00:16:04.902 lat (usec): min=4670, max=44694, avg=15603.47, stdev=5406.19 00:16:04.902 clat percentiles (usec): 00:16:04.902 | 1.00th=[ 9503], 5.00th=[10945], 10.00th=[11469], 20.00th=[11994], 00:16:04.902 | 30.00th=[12518], 40.00th=[13173], 50.00th=[14222], 60.00th=[15008], 00:16:04.902 | 70.00th=[15926], 80.00th=[17433], 90.00th=[20317], 95.00th=[25560], 00:16:04.902 | 99.00th=[40633], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827], 00:16:04.902 | 99.99th=[44827] 00:16:04.902 bw ( KiB/s): min=16351, max=17048, per=26.17%, avg=16699.50, stdev=492.85, samples=2 00:16:04.902 iops : min= 4087, max= 4262, avg=4174.50, stdev=123.74, samples=2 00:16:04.902 lat (msec) : 4=0.01%, 10=4.87%, 20=82.72%, 50=12.40% 00:16:04.902 cpu : usr=3.99%, sys=7.08%, ctx=535, majf=0, minf=1 00:16:04.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:16:04.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.902 issued rwts: total=4096,4306,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.902 job1: (groupid=0, jobs=1): err= 0: pid=2119882: Wed Jul 10 15:39:43 2024 00:16:04.902 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:16:04.902 slat (usec): min=2, max=28627, avg=111.47, stdev=766.35 00:16:04.902 clat (usec): min=7271, max=77666, avg=14999.76, stdev=7089.75 00:16:04.902 lat (usec): min=7274, max=93881, avg=15111.23, stdev=7150.01 00:16:04.902 clat percentiles (usec): 00:16:04.902 | 1.00th=[ 7570], 5.00th=[10552], 10.00th=[11600], 
20.00th=[11994], 00:16:04.902 | 30.00th=[12387], 40.00th=[12518], 50.00th=[13042], 60.00th=[13304], 00:16:04.902 | 70.00th=[13960], 80.00th=[15533], 90.00th=[20055], 95.00th=[28181], 00:16:04.902 | 99.00th=[46924], 99.50th=[51643], 99.90th=[78119], 99.95th=[78119], 00:16:04.902 | 99.99th=[78119] 00:16:04.902 write: IOPS=4120, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1009msec); 0 zone resets 00:16:04.902 slat (usec): min=3, max=23997, avg=123.08, stdev=851.06 00:16:04.902 clat (usec): min=6569, max=56521, avg=15855.28, stdev=6261.99 00:16:04.902 lat (usec): min=8465, max=56690, avg=15978.37, stdev=6331.87 00:16:04.902 clat percentiles (usec): 00:16:04.902 | 1.00th=[ 9110], 5.00th=[11076], 10.00th=[12125], 20.00th=[12911], 00:16:04.902 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14484], 00:16:04.902 | 70.00th=[15008], 80.00th=[16188], 90.00th=[23987], 95.00th=[32113], 00:16:04.902 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 99.95th=[47973], 00:16:04.902 | 99.99th=[56361] 00:16:04.902 bw ( KiB/s): min=13776, max=18954, per=25.65%, avg=16365.00, stdev=3661.40, samples=2 00:16:04.902 iops : min= 3444, max= 4738, avg=4091.00, stdev=915.00, samples=2 00:16:04.902 lat (msec) : 10=2.77%, 20=87.07%, 50=9.86%, 100=0.29% 00:16:04.902 cpu : usr=4.27%, sys=6.35%, ctx=400, majf=0, minf=1 00:16:04.902 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:16:04.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.902 issued rwts: total=4096,4158,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.902 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.902 job2: (groupid=0, jobs=1): err= 0: pid=2119883: Wed Jul 10 15:39:43 2024 00:16:04.902 read: IOPS=5052, BW=19.7MiB/s (20.7MB/s)(19.8MiB/1003msec) 00:16:04.902 slat (usec): min=3, max=6194, avg=100.39, stdev=474.13 00:16:04.902 clat (usec): min=1882, max=22395, avg=12856.65, stdev=1744.08 00:16:04.902 lat (usec): min=3572, max=22447, avg=12957.04, stdev=1745.44 00:16:04.902 clat percentiles (usec): 00:16:04.902 | 1.00th=[ 8717], 5.00th=[10290], 10.00th=[10814], 20.00th=[11600], 00:16:04.902 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12780], 60.00th=[13173], 00:16:04.902 | 70.00th=[13566], 80.00th=[13960], 90.00th=[15139], 95.00th=[15926], 00:16:04.902 | 99.00th=[17695], 99.50th=[18482], 99.90th=[20579], 99.95th=[20579], 00:16:04.902 | 99.99th=[22414] 00:16:04.902 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:16:04.902 slat (usec): min=4, max=5848, avg=87.30, stdev=448.50 00:16:04.902 clat (usec): min=7747, max=23514, avg=12019.76, stdev=1756.17 00:16:04.902 lat (usec): min=7755, max=23548, avg=12107.05, stdev=1757.92 00:16:04.902 clat percentiles (usec): 00:16:04.902 | 1.00th=[ 8717], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:16:04.902 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:16:04.902 | 70.00th=[12649], 80.00th=[13173], 90.00th=[14091], 95.00th=[15008], 00:16:04.902 | 99.00th=[18482], 99.50th=[18482], 99.90th=[21365], 99.95th=[21365], 00:16:04.902 | 99.99th=[23462] 00:16:04.902 bw ( KiB/s): min=20480, max=20480, per=32.10%, avg=20480.00, stdev= 0.00, samples=2 00:16:04.902 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:16:04.902 lat (msec) : 2=0.01%, 4=0.10%, 10=7.65%, 20=92.01%, 50=0.24% 00:16:04.902 cpu : usr=5.39%, sys=9.78%, ctx=472, majf=0, minf=1 00:16:04.902 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:04.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.903 issued rwts: total=5068,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.903 job3: (groupid=0, jobs=1): err= 0: pid=2119884: Wed Jul 10 15:39:43 2024 00:16:04.903 read: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec) 00:16:04.903 slat (usec): min=3, max=22798, avg=217.90, stdev=1344.25 00:16:04.903 clat (usec): min=9537, max=88238, avg=27451.61, stdev=15691.56 00:16:04.903 lat (usec): min=9546, max=88254, avg=27669.52, stdev=15806.61 00:16:04.903 clat percentiles (usec): 00:16:04.903 | 1.00th=[10814], 5.00th=[13960], 10.00th=[15139], 20.00th=[15795], 00:16:04.903 | 30.00th=[17695], 40.00th=[19268], 50.00th=[21365], 60.00th=[25297], 00:16:04.903 | 70.00th=[28705], 80.00th=[33817], 90.00th=[53740], 95.00th=[64750], 00:16:04.903 | 99.00th=[78119], 99.50th=[79168], 99.90th=[79168], 99.95th=[88605], 00:16:04.903 | 99.99th=[88605] 00:16:04.903 write: IOPS=2488, BW=9954KiB/s (10.2MB/s)(9.81MiB/1009msec); 0 zone resets 00:16:04.903 slat (usec): min=4, max=23978, avg=212.69, stdev=1052.03 00:16:04.903 clat (usec): min=7108, max=69710, avg=28255.39, stdev=14695.51 00:16:04.903 lat (usec): min=7461, max=69730, avg=28468.09, stdev=14784.38 00:16:04.903 clat percentiles (usec): 00:16:04.903 | 1.00th=[ 9765], 5.00th=[12518], 10.00th=[15139], 20.00th=[16581], 00:16:04.903 | 30.00th=[17433], 40.00th=[19530], 50.00th=[23462], 60.00th=[26870], 00:16:04.903 | 70.00th=[30540], 80.00th=[40633], 90.00th=[52167], 95.00th=[60031], 00:16:04.903 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:16:04.903 | 99.99th=[69731] 00:16:04.903 bw ( KiB/s): min= 6784, max=12263, per=14.92%, avg=9523.50, stdev=3874.24, samples=2 00:16:04.903 iops : min= 1696, max= 3065, avg=2380.50, stdev=968.03, samples=2 00:16:04.903 lat (msec) : 10=0.77%, 20=40.32%, 50=45.84%, 100=13.07% 00:16:04.903 cpu : usr=2.78%, sys=4.56%, ctx=295, majf=0, minf=1 00:16:04.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:16:04.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:04.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:04.903 issued rwts: total=2048,2511,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:04.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:04.903 00:16:04.903 Run status group 0 (all jobs): 00:16:04.903 READ: bw=59.3MiB/s (62.1MB/s), 8119KiB/s-19.7MiB/s (8314kB/s-20.7MB/s), io=59.8MiB (62.7MB), run=1003-1009msec 00:16:04.903 WRITE: bw=62.3MiB/s (65.3MB/s), 9954KiB/s-19.9MiB/s (10.2MB/s-20.9MB/s), io=62.9MiB (65.9MB), run=1003-1009msec 00:16:04.903 00:16:04.903 Disk stats (read/write): 00:16:04.903 nvme0n1: ios=3634/3743, merge=0/0, ticks=15895/15534, in_queue=31429, util=86.57% 00:16:04.903 nvme0n2: ios=3301/3584, merge=0/0, ticks=20597/20074, in_queue=40671, util=98.68% 00:16:04.903 nvme0n3: ios=4260/4608, merge=0/0, ticks=17791/16266, in_queue=34057, util=99.06% 00:16:04.903 nvme0n4: ios=2103/2111, merge=0/0, ticks=28863/20221, in_queue=49084, util=99.16% 00:16:04.903 15:39:43 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:04.903 [global] 00:16:04.903 thread=1 00:16:04.903 invalidate=1 00:16:04.903 
rw=randwrite 00:16:04.903 time_based=1 00:16:04.903 runtime=1 00:16:04.903 ioengine=libaio 00:16:04.903 direct=1 00:16:04.903 bs=4096 00:16:04.903 iodepth=128 00:16:04.903 norandommap=0 00:16:04.903 numjobs=1 00:16:04.903 00:16:04.903 verify_dump=1 00:16:04.903 verify_backlog=512 00:16:04.903 verify_state_save=0 00:16:04.903 do_verify=1 00:16:04.903 verify=crc32c-intel 00:16:04.903 [job0] 00:16:04.903 filename=/dev/nvme0n1 00:16:04.903 [job1] 00:16:04.903 filename=/dev/nvme0n2 00:16:04.903 [job2] 00:16:04.903 filename=/dev/nvme0n3 00:16:04.903 [job3] 00:16:04.903 filename=/dev/nvme0n4 00:16:04.903 Could not set queue depth (nvme0n1) 00:16:04.903 Could not set queue depth (nvme0n2) 00:16:04.903 Could not set queue depth (nvme0n3) 00:16:04.903 Could not set queue depth (nvme0n4) 00:16:04.903 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.903 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.903 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.903 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:04.903 fio-3.35 00:16:04.903 Starting 4 threads 00:16:06.276 00:16:06.276 job0: (groupid=0, jobs=1): err= 0: pid=2120128: Wed Jul 10 15:39:45 2024 00:16:06.276 read: IOPS=4716, BW=18.4MiB/s (19.3MB/s)(18.5MiB/1004msec) 00:16:06.276 slat (usec): min=2, max=10554, avg=99.72, stdev=665.86 00:16:06.276 clat (usec): min=2683, max=37624, avg=13008.00, stdev=4558.37 00:16:06.276 lat (usec): min=3189, max=37628, avg=13107.71, stdev=4597.08 00:16:06.276 clat percentiles (usec): 00:16:06.276 | 1.00th=[ 4359], 5.00th=[ 7504], 10.00th=[ 8455], 20.00th=[10290], 00:16:06.276 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11994], 60.00th=[12518], 00:16:06.276 | 70.00th=[13435], 80.00th=[16057], 90.00th=[18220], 95.00th=[20055], 00:16:06.276 | 99.00th=[34866], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:16:06.276 | 99.99th=[37487] 00:16:06.276 write: IOPS=5099, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1004msec); 0 zone resets 00:16:06.276 slat (usec): min=3, max=16030, avg=84.45, stdev=584.21 00:16:06.276 clat (usec): min=514, max=61901, avg=12871.47, stdev=8584.22 00:16:06.276 lat (usec): min=532, max=61907, avg=12955.92, stdev=8621.09 00:16:06.276 clat percentiles (usec): 00:16:06.276 | 1.00th=[ 1172], 5.00th=[ 3458], 10.00th=[ 4752], 20.00th=[ 7242], 00:16:06.276 | 30.00th=[ 9503], 40.00th=[10945], 50.00th=[11994], 60.00th=[12649], 00:16:06.276 | 70.00th=[13435], 80.00th=[15008], 90.00th=[19268], 95.00th=[28705], 00:16:06.276 | 99.00th=[53216], 99.50th=[58983], 99.90th=[62129], 99.95th=[62129], 00:16:06.276 | 99.99th=[62129] 00:16:06.276 bw ( KiB/s): min=18744, max=22096, per=30.24%, avg=20420.00, stdev=2370.22, samples=2 00:16:06.276 iops : min= 4686, max= 5524, avg=5105.00, stdev=592.56, samples=2 00:16:06.276 lat (usec) : 750=0.03%, 1000=0.06% 00:16:06.276 lat (msec) : 2=0.97%, 4=2.43%, 10=20.55%, 20=69.08%, 50=6.24% 00:16:06.276 lat (msec) : 100=0.64% 00:16:06.276 cpu : usr=4.39%, sys=6.48%, ctx=462, majf=0, minf=1 00:16:06.276 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:06.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.277 issued rwts: total=4735,5120,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:16:06.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.277 job1: (groupid=0, jobs=1): err= 0: pid=2120129: Wed Jul 10 15:39:45 2024 00:16:06.277 read: IOPS=5279, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1006msec) 00:16:06.277 slat (usec): min=2, max=12021, avg=81.11, stdev=630.63 00:16:06.277 clat (usec): min=1722, max=32156, avg=11218.67, stdev=3637.02 00:16:06.277 lat (usec): min=3408, max=32162, avg=11299.77, stdev=3669.03 00:16:06.277 clat percentiles (usec): 00:16:06.277 | 1.00th=[ 4621], 5.00th=[ 5800], 10.00th=[ 7570], 20.00th=[ 9241], 00:16:06.277 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:16:06.277 | 70.00th=[12256], 80.00th=[12911], 90.00th=[14746], 95.00th=[17433], 00:16:06.277 | 99.00th=[29754], 99.50th=[30016], 99.90th=[30016], 99.95th=[32113], 00:16:06.277 | 99.99th=[32113] 00:16:06.277 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets 00:16:06.277 slat (usec): min=3, max=18219, avg=69.01, stdev=617.56 00:16:06.277 clat (usec): min=1065, max=62434, avg=12082.10, stdev=7736.07 00:16:06.277 lat (usec): min=1870, max=62441, avg=12151.12, stdev=7750.49 00:16:06.277 clat percentiles (usec): 00:16:06.277 | 1.00th=[ 2835], 5.00th=[ 5342], 10.00th=[ 6128], 20.00th=[ 8291], 00:16:06.277 | 30.00th=[ 9503], 40.00th=[10290], 50.00th=[10945], 60.00th=[11207], 00:16:06.277 | 70.00th=[11863], 80.00th=[13042], 90.00th=[17171], 95.00th=[25560], 00:16:06.277 | 99.00th=[52691], 99.50th=[58983], 99.90th=[62129], 99.95th=[62129], 00:16:06.277 | 99.99th=[62653] 00:16:06.277 bw ( KiB/s): min=20480, max=24576, per=33.36%, avg=22528.00, stdev=2896.31, samples=2 00:16:06.277 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:16:06.277 lat (msec) : 2=0.05%, 4=1.29%, 10=34.68%, 20=59.62%, 50=3.82% 00:16:06.277 lat (msec) : 100=0.54% 00:16:06.277 cpu : usr=3.48%, sys=5.57%, ctx=344, majf=0, minf=1 00:16:06.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:06.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.277 issued rwts: total=5311,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.277 job2: (groupid=0, jobs=1): err= 0: pid=2120134: Wed Jul 10 15:39:45 2024 00:16:06.277 read: IOPS=2577, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1003msec) 00:16:06.277 slat (usec): min=2, max=44453, avg=203.41, stdev=1744.78 00:16:06.277 clat (usec): min=607, max=90823, avg=25464.25, stdev=16923.97 00:16:06.277 lat (usec): min=5206, max=90837, avg=25667.66, stdev=17060.48 00:16:06.277 clat percentiles (usec): 00:16:06.277 | 1.00th=[ 6587], 5.00th=[ 8455], 10.00th=[11600], 20.00th=[12387], 00:16:06.277 | 30.00th=[12780], 40.00th=[13435], 50.00th=[17695], 60.00th=[28443], 00:16:06.277 | 70.00th=[33162], 80.00th=[40109], 90.00th=[45351], 95.00th=[57410], 00:16:06.277 | 99.00th=[84411], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:16:06.277 | 99.99th=[90702] 00:16:06.277 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:16:06.277 slat (usec): min=3, max=28006, avg=147.56, stdev=1236.20 00:16:06.277 clat (usec): min=5528, max=84575, avg=19469.09, stdev=12108.75 00:16:06.277 lat (usec): min=5540, max=84581, avg=19616.65, stdev=12217.00 00:16:06.277 clat percentiles (usec): 00:16:06.277 | 1.00th=[ 5997], 5.00th=[10945], 10.00th=[11863], 20.00th=[12911], 00:16:06.277 | 
30.00th=[13566], 40.00th=[14091], 50.00th=[14353], 60.00th=[14746], 00:16:06.277 | 70.00th=[15926], 80.00th=[23987], 90.00th=[38536], 95.00th=[51643], 00:16:06.277 | 99.00th=[57934], 99.50th=[57934], 99.90th=[66323], 99.95th=[80217], 00:16:06.277 | 99.99th=[84411] 00:16:06.277 bw ( KiB/s): min= 8904, max=14848, per=17.59%, avg=11876.00, stdev=4203.04, samples=2 00:16:06.277 iops : min= 2226, max= 3712, avg=2969.00, stdev=1050.76, samples=2 00:16:06.277 lat (usec) : 750=0.02% 00:16:06.277 lat (msec) : 10=5.18%, 20=60.63%, 50=27.49%, 100=6.68% 00:16:06.277 cpu : usr=2.40%, sys=4.69%, ctx=210, majf=0, minf=1 00:16:06.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:16:06.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.277 issued rwts: total=2585,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.277 job3: (groupid=0, jobs=1): err= 0: pid=2120135: Wed Jul 10 15:39:45 2024 00:16:06.277 read: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec) 00:16:06.277 slat (usec): min=2, max=25189, avg=145.78, stdev=1080.78 00:16:06.277 clat (msec): min=3, max=104, avg=18.32, stdev= 8.30 00:16:06.277 lat (msec): min=3, max=104, avg=18.46, stdev= 8.34 00:16:06.277 clat percentiles (msec): 00:16:06.277 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 11], 20.00th=[ 12], 00:16:06.277 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 20], 60.00th=[ 22], 00:16:06.277 | 70.00th=[ 22], 80.00th=[ 23], 90.00th=[ 25], 95.00th=[ 27], 00:16:06.277 | 99.00th=[ 34], 99.50th=[ 34], 99.90th=[ 106], 99.95th=[ 106], 00:16:06.277 | 99.99th=[ 106] 00:16:06.277 write: IOPS=3147, BW=12.3MiB/s (12.9MB/s)(12.3MiB/1004msec); 0 zone resets 00:16:06.277 slat (usec): min=3, max=18244, avg=167.35, stdev=1005.38 00:16:06.277 clat (usec): min=2564, max=95651, avg=22515.64, stdev=12404.49 00:16:06.277 lat (usec): min=2800, max=95656, avg=22682.99, stdev=12467.59 00:16:06.277 clat percentiles (usec): 00:16:06.277 | 1.00th=[ 5997], 5.00th=[11994], 10.00th=[13960], 20.00th=[15008], 00:16:06.277 | 30.00th=[15664], 40.00th=[16581], 50.00th=[17695], 60.00th=[19530], 00:16:06.277 | 70.00th=[22938], 80.00th=[27395], 90.00th=[43254], 95.00th=[52691], 00:16:06.277 | 99.00th=[61604], 99.50th=[76022], 99.90th=[83362], 99.95th=[83362], 00:16:06.277 | 99.99th=[95945] 00:16:06.277 bw ( KiB/s): min=12272, max=12304, per=18.20%, avg=12288.00, stdev=22.63, samples=2 00:16:06.277 iops : min= 3068, max= 3076, avg=3072.00, stdev= 5.66, samples=2 00:16:06.277 lat (msec) : 4=0.50%, 10=5.52%, 20=53.42%, 50=37.16%, 100=3.19% 00:16:06.277 lat (msec) : 250=0.21% 00:16:06.277 cpu : usr=1.99%, sys=4.59%, ctx=296, majf=0, minf=1 00:16:06.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:06.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:06.277 issued rwts: total=3072,3160,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:06.277 00:16:06.277 Run status group 0 (all jobs): 00:16:06.277 READ: bw=61.0MiB/s (63.9MB/s), 10.1MiB/s-20.6MiB/s (10.6MB/s-21.6MB/s), io=61.3MiB (64.3MB), run=1003-1006msec 00:16:06.277 WRITE: bw=65.9MiB/s (69.2MB/s), 12.0MiB/s-21.9MiB/s (12.5MB/s-22.9MB/s), io=66.3MiB (69.6MB), run=1003-1006msec 00:16:06.277 00:16:06.277 Disk stats 
(read/write): 00:16:06.277 nvme0n1: ios=4143/4222, merge=0/0, ticks=47510/50170, in_queue=97680, util=87.78% 00:16:06.277 nvme0n2: ios=4657/4699, merge=0/0, ticks=37391/43584, in_queue=80975, util=87.92% 00:16:06.277 nvme0n3: ios=2048/2449, merge=0/0, ticks=27507/21769, in_queue=49276, util=88.83% 00:16:06.277 nvme0n4: ios=2508/2560, merge=0/0, ticks=35954/53226, in_queue=89180, util=91.04% 00:16:06.277 15:39:45 -- target/fio.sh@55 -- # sync 00:16:06.277 15:39:45 -- target/fio.sh@59 -- # fio_pid=2120284 00:16:06.277 15:39:45 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:06.277 15:39:45 -- target/fio.sh@61 -- # sleep 3 00:16:06.277 [global] 00:16:06.277 thread=1 00:16:06.277 invalidate=1 00:16:06.277 rw=read 00:16:06.277 time_based=1 00:16:06.277 runtime=10 00:16:06.277 ioengine=libaio 00:16:06.277 direct=1 00:16:06.277 bs=4096 00:16:06.277 iodepth=1 00:16:06.277 norandommap=1 00:16:06.277 numjobs=1 00:16:06.277 00:16:06.277 [job0] 00:16:06.277 filename=/dev/nvme0n1 00:16:06.277 [job1] 00:16:06.277 filename=/dev/nvme0n2 00:16:06.277 [job2] 00:16:06.277 filename=/dev/nvme0n3 00:16:06.277 [job3] 00:16:06.277 filename=/dev/nvme0n4 00:16:06.277 Could not set queue depth (nvme0n1) 00:16:06.277 Could not set queue depth (nvme0n2) 00:16:06.277 Could not set queue depth (nvme0n3) 00:16:06.277 Could not set queue depth (nvme0n4) 00:16:06.595 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.595 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.595 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.595 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:06.595 fio-3.35 00:16:06.595 Starting 4 threads 00:16:09.117 15:39:48 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:09.682 15:39:48 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:09.682 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=12734464, buflen=4096 00:16:09.682 fio: pid=2120486, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.682 15:39:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.682 15:39:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:09.682 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=32579584, buflen=4096 00:16:09.682 fio: pid=2120485, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.939 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=14983168, buflen=4096 00:16:09.939 fio: pid=2120483, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:09.939 15:39:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:09.939 15:39:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:16:10.197 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=36798464, buflen=4096 00:16:10.197 fio: pid=2120484, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:10.197 
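The burst of Remote I/O errors above is the hotplug half of fio.sh: a 10-second read job is started against the four exported namespaces, and the raid/concat volumes and the malloc bdevs behind them are deleted over RPC while it runs, so fio is expected to exit with errors. A minimal sketch of that pattern, assuming the fio-wrapper is backgrounded and its pid captured, as the fio_pid/wait pair in the log implies:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Long read job against the exported namespaces, backgrounded so the backing
  # bdevs can be deleted underneath it (4 KiB blocks, queue depth 1, 10 s runtime).
  "$SPDK/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &
  fio_pid=$!
  sleep 3

  # Pull the storage away: raid/concat volumes first, then the malloc bdevs.
  "$SPDK/scripts/rpc.py" bdev_raid_delete concat0
  "$SPDK/scripts/rpc.py" bdev_raid_delete raid0
  for i in 0 1 2 3 4 5 6; do
      "$SPDK/scripts/rpc.py" bdev_malloc_delete "Malloc$i"
  done

  # fio is expected to fail (err=121, Remote I/O error in the output above).
  if wait "$fio_pid"; then
      echo "unexpected: fio survived the hotplug"
  else
      echo "nvmf hotplug test: fio failed as expected"
  fi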
15:39:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.197 15:39:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:10.197 00:16:10.197 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2120483: Wed Jul 10 15:39:49 2024 00:16:10.197 read: IOPS=1072, BW=4290KiB/s (4393kB/s)(14.3MiB/3411msec) 00:16:10.197 slat (usec): min=5, max=28536, avg=25.98, stdev=516.55 00:16:10.197 clat (usec): min=315, max=42579, avg=899.94, stdev=3917.89 00:16:10.197 lat (usec): min=321, max=42614, avg=925.93, stdev=3951.23 00:16:10.197 clat percentiles (usec): 00:16:10.197 | 1.00th=[ 326], 5.00th=[ 338], 10.00th=[ 343], 20.00th=[ 367], 00:16:10.197 | 30.00th=[ 383], 40.00th=[ 457], 50.00th=[ 510], 60.00th=[ 562], 00:16:10.197 | 70.00th=[ 611], 80.00th=[ 676], 90.00th=[ 742], 95.00th=[ 799], 00:16:10.197 | 99.00th=[ 1418], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:16:10.197 | 99.99th=[42730] 00:16:10.197 bw ( KiB/s): min= 112, max= 6680, per=14.85%, avg=3836.00, stdev=3006.25, samples=6 00:16:10.197 iops : min= 28, max= 1670, avg=959.00, stdev=751.56, samples=6 00:16:10.197 lat (usec) : 500=48.59%, 750=42.09%, 1000=8.09% 00:16:10.197 lat (msec) : 2=0.25%, 4=0.03%, 50=0.93% 00:16:10.197 cpu : usr=1.14%, sys=2.32%, ctx=3661, majf=0, minf=1 00:16:10.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 issued rwts: total=3659,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.197 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2120484: Wed Jul 10 15:39:49 2024 00:16:10.197 read: IOPS=2448, BW=9792KiB/s (10.0MB/s)(35.1MiB/3670msec) 00:16:10.197 slat (usec): min=5, max=24971, avg=20.05, stdev=410.25 00:16:10.197 clat (usec): min=265, max=31224, avg=383.17, stdev=371.33 00:16:10.197 lat (usec): min=271, max=31230, avg=403.22, stdev=553.78 00:16:10.197 clat percentiles (usec): 00:16:10.197 | 1.00th=[ 285], 5.00th=[ 306], 10.00th=[ 338], 20.00th=[ 347], 00:16:10.197 | 30.00th=[ 351], 40.00th=[ 355], 50.00th=[ 363], 60.00th=[ 371], 00:16:10.197 | 70.00th=[ 379], 80.00th=[ 388], 90.00th=[ 453], 95.00th=[ 502], 00:16:10.197 | 99.00th=[ 594], 99.50th=[ 660], 99.90th=[ 1287], 99.95th=[ 4047], 00:16:10.197 | 99.99th=[31327] 00:16:10.197 bw ( KiB/s): min= 9072, max=10816, per=38.19%, avg=9866.57, stdev=772.47, samples=7 00:16:10.197 iops : min= 2268, max= 2704, avg=2466.57, stdev=193.20, samples=7 00:16:10.197 lat (usec) : 500=94.92%, 750=4.65%, 1000=0.24% 00:16:10.197 lat (msec) : 2=0.10%, 4=0.01%, 10=0.03%, 20=0.01%, 50=0.01% 00:16:10.197 cpu : usr=1.39%, sys=4.22%, ctx=8995, majf=0, minf=1 00:16:10.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 issued rwts: total=8985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.197 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2120485: Wed Jul 
10 15:39:49 2024 00:16:10.197 read: IOPS=2509, BW=9.80MiB/s (10.3MB/s)(31.1MiB/3170msec) 00:16:10.197 slat (nsec): min=5921, max=69533, avg=12243.14, stdev=6365.35 00:16:10.197 clat (usec): min=291, max=41843, avg=380.27, stdev=839.01 00:16:10.197 lat (usec): min=298, max=41851, avg=392.51, stdev=839.42 00:16:10.197 clat percentiles (usec): 00:16:10.197 | 1.00th=[ 302], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:16:10.197 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 351], 00:16:10.197 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 392], 95.00th=[ 578], 00:16:10.197 | 99.00th=[ 758], 99.50th=[ 799], 99.90th=[ 1106], 99.95th=[22938], 00:16:10.197 | 99.99th=[41681] 00:16:10.197 bw ( KiB/s): min= 6224, max=11872, per=39.11%, avg=10104.00, stdev=2149.20, samples=6 00:16:10.197 iops : min= 1556, max= 2968, avg=2526.00, stdev=537.30, samples=6 00:16:10.197 lat (usec) : 500=93.00%, 750=5.87%, 1000=1.01% 00:16:10.197 lat (msec) : 2=0.06%, 50=0.05% 00:16:10.197 cpu : usr=2.56%, sys=4.17%, ctx=7956, majf=0, minf=1 00:16:10.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 issued rwts: total=7955,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.197 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2120486: Wed Jul 10 15:39:49 2024 00:16:10.197 read: IOPS=1056, BW=4223KiB/s (4324kB/s)(12.1MiB/2945msec) 00:16:10.197 slat (nsec): min=5021, max=64096, avg=16800.89, stdev=7234.65 00:16:10.197 clat (usec): min=285, max=42498, avg=917.15, stdev=4527.89 00:16:10.197 lat (usec): min=291, max=42513, avg=933.95, stdev=4527.90 00:16:10.197 clat percentiles (usec): 00:16:10.197 | 1.00th=[ 297], 5.00th=[ 314], 10.00th=[ 330], 20.00th=[ 347], 00:16:10.197 | 30.00th=[ 355], 40.00th=[ 367], 50.00th=[ 375], 60.00th=[ 388], 00:16:10.197 | 70.00th=[ 420], 80.00th=[ 482], 90.00th=[ 553], 95.00th=[ 619], 00:16:10.197 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:16:10.197 | 99.99th=[42730] 00:16:10.197 bw ( KiB/s): min= 328, max= 7768, per=19.18%, avg=4956.80, stdev=3041.70, samples=5 00:16:10.197 iops : min= 82, max= 1942, avg=1239.20, stdev=760.43, samples=5 00:16:10.197 lat (usec) : 500=83.15%, 750=15.27%, 1000=0.26% 00:16:10.197 lat (msec) : 2=0.03%, 50=1.25% 00:16:10.197 cpu : usr=1.05%, sys=2.68%, ctx=3111, majf=0, minf=1 00:16:10.197 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:10.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:10.197 issued rwts: total=3110,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:10.197 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:10.197 00:16:10.197 Run status group 0 (all jobs): 00:16:10.197 READ: bw=25.2MiB/s (26.5MB/s), 4223KiB/s-9.80MiB/s (4324kB/s-10.3MB/s), io=92.6MiB (97.1MB), run=2945-3670msec 00:16:10.197 00:16:10.197 Disk stats (read/write): 00:16:10.197 nvme0n1: ios=3523/0, merge=0/0, ticks=3203/0, in_queue=3203, util=94.82% 00:16:10.197 nvme0n2: ios=8861/0, merge=0/0, ticks=3291/0, in_queue=3291, util=94.32% 00:16:10.197 nvme0n3: ios=7873/0, merge=0/0, ticks=3238/0, in_queue=3238, util=99.94% 00:16:10.197 nvme0n4: ios=3157/0, merge=0/0, 
ticks=3184/0, in_queue=3184, util=99.97% 00:16:10.454 15:39:49 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.454 15:39:49 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:10.712 15:39:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.712 15:39:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:10.969 15:39:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:10.969 15:39:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:11.227 15:39:50 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:11.227 15:39:50 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:11.484 15:39:50 -- target/fio.sh@69 -- # fio_status=0 00:16:11.484 15:39:50 -- target/fio.sh@70 -- # wait 2120284 00:16:11.484 15:39:50 -- target/fio.sh@70 -- # fio_status=4 00:16:11.484 15:39:50 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:11.742 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.742 15:39:50 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:11.742 15:39:50 -- common/autotest_common.sh@1198 -- # local i=0 00:16:11.742 15:39:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:11.742 15:39:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.742 15:39:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:11.742 15:39:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:11.742 15:39:50 -- common/autotest_common.sh@1210 -- # return 0 00:16:11.742 15:39:50 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:11.742 15:39:50 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:11.742 nvmf hotplug test: fio failed as expected 00:16:11.742 15:39:50 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:11.999 15:39:51 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:11.999 15:39:51 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:11.999 15:39:51 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:11.999 15:39:51 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:11.999 15:39:51 -- target/fio.sh@91 -- # nvmftestfini 00:16:11.999 15:39:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:11.999 15:39:51 -- nvmf/common.sh@116 -- # sync 00:16:11.999 15:39:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:11.999 15:39:51 -- nvmf/common.sh@119 -- # set +e 00:16:11.999 15:39:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:11.999 15:39:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:11.999 rmmod nvme_tcp 00:16:11.999 rmmod nvme_fabrics 00:16:11.999 rmmod nvme_keyring 00:16:11.999 15:39:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:11.999 15:39:51 -- nvmf/common.sh@123 -- # set -e 00:16:12.000 15:39:51 -- nvmf/common.sh@124 -- # return 0 00:16:12.000 15:39:51 -- nvmf/common.sh@477 -- # '[' -n 2118298 ']' 00:16:12.000 15:39:51 -- nvmf/common.sh@478 -- # killprocess 
2118298 00:16:12.000 15:39:51 -- common/autotest_common.sh@926 -- # '[' -z 2118298 ']' 00:16:12.000 15:39:51 -- common/autotest_common.sh@930 -- # kill -0 2118298 00:16:12.000 15:39:51 -- common/autotest_common.sh@931 -- # uname 00:16:12.000 15:39:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:12.000 15:39:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2118298 00:16:12.000 15:39:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:12.000 15:39:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:12.000 15:39:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2118298' 00:16:12.000 killing process with pid 2118298 00:16:12.000 15:39:51 -- common/autotest_common.sh@945 -- # kill 2118298 00:16:12.000 15:39:51 -- common/autotest_common.sh@950 -- # wait 2118298 00:16:12.258 15:39:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:12.258 15:39:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:12.259 15:39:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:12.259 15:39:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:12.259 15:39:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:12.259 15:39:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.259 15:39:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:12.259 15:39:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.791 15:39:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:14.791 00:16:14.791 real 0m23.709s 00:16:14.791 user 1m17.204s 00:16:14.791 sys 0m8.790s 00:16:14.791 15:39:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:14.791 15:39:53 -- common/autotest_common.sh@10 -- # set +x 00:16:14.791 ************************************ 00:16:14.791 END TEST nvmf_fio_target 00:16:14.791 ************************************ 00:16:14.791 15:39:53 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:14.791 15:39:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:14.791 15:39:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:14.791 15:39:53 -- common/autotest_common.sh@10 -- # set +x 00:16:14.791 ************************************ 00:16:14.791 START TEST nvmf_bdevio 00:16:14.791 ************************************ 00:16:14.791 15:39:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:14.791 * Looking for test storage... 
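The teardown that closes each of these target tests (nvmftestfini, run again at the end of the bdevio test below) reduces to a few host-visible steps: unload the NVMe/TCP host modules, kill the nvmf_tgt process, and undo the namespace plumbing. A condensed sketch, run as root as the CI does; the netns delete is an assumption about what _remove_spdk_ns does, while the other commands appear in the log:

  # Host-side module unload (the rmmod lines above are this command's output).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the nvmf_tgt app started by nvmfappstart (pid 2118298 here, 2123139 below);
  # $nvmfpid is the variable the scripts keep it in.
  kill "$nvmfpid"
  wait "$nvmfpid"

  # Undo nvmf_tcp_init: drop the target namespace (assumed behaviour of
  # _remove_spdk_ns) and flush the initiator address, the log's final step.
  ip netns delete cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_1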
00:16:14.791 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.791 15:39:53 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.791 15:39:53 -- nvmf/common.sh@7 -- # uname -s 00:16:14.791 15:39:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.791 15:39:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.791 15:39:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.791 15:39:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.791 15:39:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.791 15:39:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.791 15:39:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.791 15:39:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.791 15:39:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.791 15:39:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.791 15:39:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.791 15:39:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.791 15:39:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.791 15:39:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.791 15:39:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.791 15:39:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.791 15:39:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.791 15:39:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.791 15:39:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.791 15:39:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.791 15:39:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.791 15:39:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.791 15:39:53 -- paths/export.sh@5 -- # export PATH 00:16:14.791 15:39:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.791 15:39:53 -- nvmf/common.sh@46 -- # : 0 00:16:14.791 15:39:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:14.791 15:39:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:14.791 15:39:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:14.791 15:39:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.791 15:39:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.791 15:39:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:14.791 15:39:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:14.791 15:39:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:14.791 15:39:53 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:14.791 15:39:53 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:14.791 15:39:53 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:14.791 15:39:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:14.791 15:39:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:14.791 15:39:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:14.791 15:39:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:14.791 15:39:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:14.791 15:39:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:14.791 15:39:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:14.791 15:39:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.791 15:39:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:14.791 15:39:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:14.791 15:39:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:14.791 15:39:53 -- common/autotest_common.sh@10 -- # set +x 00:16:16.751 15:39:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:16.751 15:39:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:16.751 15:39:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:16.751 15:39:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:16.751 15:39:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:16.751 15:39:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:16.751 15:39:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:16.751 15:39:55 -- nvmf/common.sh@294 -- # net_devs=() 00:16:16.751 15:39:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:16.751 15:39:55 -- nvmf/common.sh@295 
-- # e810=() 00:16:16.751 15:39:55 -- nvmf/common.sh@295 -- # local -ga e810 00:16:16.751 15:39:55 -- nvmf/common.sh@296 -- # x722=() 00:16:16.751 15:39:55 -- nvmf/common.sh@296 -- # local -ga x722 00:16:16.751 15:39:55 -- nvmf/common.sh@297 -- # mlx=() 00:16:16.751 15:39:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:16.751 15:39:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:16.751 15:39:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:16.751 15:39:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:16.751 15:39:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:16.751 15:39:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.751 15:39:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:16.751 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:16.751 15:39:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:16.751 15:39:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:16.751 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:16.751 15:39:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:16.751 15:39:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.751 15:39:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.751 15:39:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.751 15:39:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.751 15:39:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:16.751 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:16.751 15:39:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.751 15:39:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:16.751 15:39:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:16.751 15:39:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:16.751 15:39:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:16.751 15:39:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:16.751 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:16.751 15:39:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:16.751 15:39:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:16.751 15:39:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:16.751 15:39:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:16.751 15:39:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.751 15:39:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.751 15:39:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:16.751 15:39:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:16.751 15:39:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:16.751 15:39:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:16.751 15:39:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:16.751 15:39:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:16.751 15:39:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.751 15:39:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:16.751 15:39:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:16.751 15:39:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:16.751 15:39:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:16.751 15:39:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:16.751 15:39:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:16.751 15:39:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:16.751 15:39:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:16.751 15:39:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:16.751 15:39:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:16.751 15:39:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:16.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:16.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:16:16.751 00:16:16.751 --- 10.0.0.2 ping statistics --- 00:16:16.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.751 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:16:16.751 15:39:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:16.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:16.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:16:16.751 00:16:16.751 --- 10.0.0.1 ping statistics --- 00:16:16.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:16.751 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:16:16.751 15:39:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:16.751 15:39:55 -- nvmf/common.sh@410 -- # return 0 00:16:16.751 15:39:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:16.751 15:39:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:16.751 15:39:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:16.751 15:39:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:16.751 15:39:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:16.751 15:39:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:16.751 15:39:55 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:16.751 15:39:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:16.751 15:39:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:16.751 15:39:55 -- common/autotest_common.sh@10 -- # set +x 00:16:16.751 15:39:55 -- nvmf/common.sh@469 -- # nvmfpid=2123139 00:16:16.751 15:39:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:16.751 15:39:55 -- nvmf/common.sh@470 -- # waitforlisten 2123139 00:16:16.751 15:39:55 -- common/autotest_common.sh@819 -- # '[' -z 2123139 ']' 00:16:16.751 15:39:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.751 15:39:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.751 15:39:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.751 15:39:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.751 15:39:55 -- common/autotest_common.sh@10 -- # set +x 00:16:16.751 [2024-07-10 15:39:55.914084] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:16.751 [2024-07-10 15:39:55.914167] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:16.752 EAL: No free 2048 kB hugepages reported on node 1 00:16:16.752 [2024-07-10 15:39:55.983417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:16.752 [2024-07-10 15:39:56.103037] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:16.752 [2024-07-10 15:39:56.103204] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:16.752 [2024-07-10 15:39:56.103224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:16.752 [2024-07-10 15:39:56.103238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
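Collecting the scattered nvmf_tcp_init steps above: the two E810 ports are split so the target side (cvl_0_0, 10.0.0.2) lives inside the cvl_0_0_ns_spdk namespace and the initiator side (cvl_0_1, 10.0.0.1) stays in the root namespace, with TCP port 4420 opened between them. The commands below are the ones the log shows, only grouped; backgrounding the nvmf_tgt invocation is implied rather than shown:

  # Target port into its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Connectivity check in both directions, then the target app inside the
  # namespace (waitforlisten then polls /var/tmp/spdk.sock until it is up).
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &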
00:16:16.752 [2024-07-10 15:39:56.103336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:16.752 [2024-07-10 15:39:56.103392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:16.752 [2024-07-10 15:39:56.103453] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:16.752 [2024-07-10 15:39:56.103457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:17.685 15:39:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.685 15:39:56 -- common/autotest_common.sh@852 -- # return 0 00:16:17.685 15:39:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:17.685 15:39:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:17.685 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:17.685 15:39:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.685 15:39:56 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:17.685 15:39:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.685 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:17.685 [2024-07-10 15:39:56.854801] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:17.685 15:39:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.685 15:39:56 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:17.685 15:39:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.685 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:17.685 Malloc0 00:16:17.685 15:39:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.685 15:39:56 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:17.685 15:39:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.685 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:17.685 15:39:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.685 15:39:56 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:17.685 15:39:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.685 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:17.685 15:39:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.685 15:39:56 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:17.685 15:39:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:17.685 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:16:17.685 [2024-07-10 15:39:56.907012] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:17.685 15:39:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:17.685 15:39:56 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:17.685 15:39:56 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:17.685 15:39:56 -- nvmf/common.sh@520 -- # config=() 00:16:17.685 15:39:56 -- nvmf/common.sh@520 -- # local subsystem config 00:16:17.685 15:39:56 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:17.685 15:39:56 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:17.685 { 00:16:17.685 "params": { 00:16:17.685 "name": "Nvme$subsystem", 00:16:17.685 "trtype": "$TEST_TRANSPORT", 00:16:17.685 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:17.685 "adrfam": "ipv4", 00:16:17.685 "trsvcid": 
"$NVMF_PORT", 00:16:17.685 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:17.685 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:17.685 "hdgst": ${hdgst:-false}, 00:16:17.685 "ddgst": ${ddgst:-false} 00:16:17.685 }, 00:16:17.685 "method": "bdev_nvme_attach_controller" 00:16:17.685 } 00:16:17.685 EOF 00:16:17.685 )") 00:16:17.685 15:39:56 -- nvmf/common.sh@542 -- # cat 00:16:17.685 15:39:56 -- nvmf/common.sh@544 -- # jq . 00:16:17.685 15:39:56 -- nvmf/common.sh@545 -- # IFS=, 00:16:17.685 15:39:56 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:17.685 "params": { 00:16:17.685 "name": "Nvme1", 00:16:17.685 "trtype": "tcp", 00:16:17.685 "traddr": "10.0.0.2", 00:16:17.685 "adrfam": "ipv4", 00:16:17.685 "trsvcid": "4420", 00:16:17.685 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:17.685 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:17.685 "hdgst": false, 00:16:17.685 "ddgst": false 00:16:17.685 }, 00:16:17.685 "method": "bdev_nvme_attach_controller" 00:16:17.685 }' 00:16:17.685 [2024-07-10 15:39:56.948289] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:17.685 [2024-07-10 15:39:56.948371] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2123253 ] 00:16:17.685 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.685 [2024-07-10 15:39:57.011055] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.943 [2024-07-10 15:39:57.125818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.943 [2024-07-10 15:39:57.125871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.943 [2024-07-10 15:39:57.125874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:18.201 [2024-07-10 15:39:57.348448] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:18.201 [2024-07-10 15:39:57.348502] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:18.201 I/O targets: 00:16:18.201 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:18.201 00:16:18.201 00:16:18.201 CUnit - A unit testing framework for C - Version 2.1-3 00:16:18.201 http://cunit.sourceforge.net/ 00:16:18.201 00:16:18.201 00:16:18.201 Suite: bdevio tests on: Nvme1n1 00:16:18.201 Test: blockdev write read block ...passed 00:16:18.201 Test: blockdev write zeroes read block ...passed 00:16:18.201 Test: blockdev write zeroes read no split ...passed 00:16:18.201 Test: blockdev write zeroes read split ...passed 00:16:18.201 Test: blockdev write zeroes read split partial ...passed 00:16:18.201 Test: blockdev reset ...[2024-07-10 15:39:57.480545] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:18.201 [2024-07-10 15:39:57.480653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf3180 (9): Bad file descriptor 00:16:18.201 [2024-07-10 15:39:57.539119] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:18.201 passed 00:16:18.459 Test: blockdev write read 8 blocks ...passed 00:16:18.459 Test: blockdev write read size > 128k ...passed 00:16:18.459 Test: blockdev write read invalid size ...passed 00:16:18.459 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:18.459 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:18.459 Test: blockdev write read max offset ...passed 00:16:18.459 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:18.459 Test: blockdev writev readv 8 blocks ...passed 00:16:18.459 Test: blockdev writev readv 30 x 1block ...passed 00:16:18.459 Test: blockdev writev readv block ...passed 00:16:18.459 Test: blockdev writev readv size > 128k ...passed 00:16:18.459 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:18.459 Test: blockdev comparev and writev ...[2024-07-10 15:39:57.754528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.754563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:18.459 [2024-07-10 15:39:57.754588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.754605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:18.459 [2024-07-10 15:39:57.754977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.755001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:18.459 [2024-07-10 15:39:57.755023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.755039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:18.459 [2024-07-10 15:39:57.755395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.755418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:18.459 [2024-07-10 15:39:57.755449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.755466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:18.459 [2024-07-10 15:39:57.755809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.755833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:18.459 [2024-07-10 15:39:57.755855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:18.459 [2024-07-10 15:39:57.755871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:18.459 passed 00:16:18.717 Test: blockdev nvme passthru rw ...passed 00:16:18.717 Test: blockdev nvme passthru vendor specific ...[2024-07-10 15:39:57.837793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.717 [2024-07-10 15:39:57.837821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:18.717 [2024-07-10 15:39:57.838025] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.717 [2024-07-10 15:39:57.838049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:18.717 [2024-07-10 15:39:57.838252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.717 [2024-07-10 15:39:57.838275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:18.717 [2024-07-10 15:39:57.838478] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:18.717 [2024-07-10 15:39:57.838501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:18.717 passed 00:16:18.717 Test: blockdev nvme admin passthru ...passed 00:16:18.717 Test: blockdev copy ...passed 00:16:18.717 00:16:18.717 Run Summary: Type Total Ran Passed Failed Inactive 00:16:18.717 suites 1 1 n/a 0 0 00:16:18.717 tests 23 23 23 0 0 00:16:18.717 asserts 152 152 152 0 n/a 00:16:18.717 00:16:18.717 Elapsed time = 1.084 seconds 00:16:18.975 15:39:58 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:18.975 15:39:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.975 15:39:58 -- common/autotest_common.sh@10 -- # set +x 00:16:18.975 15:39:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.975 15:39:58 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:18.975 15:39:58 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:18.975 15:39:58 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:18.975 15:39:58 -- nvmf/common.sh@116 -- # sync 00:16:18.975 15:39:58 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:18.975 15:39:58 -- nvmf/common.sh@119 -- # set +e 00:16:18.975 15:39:58 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:18.975 15:39:58 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:18.975 rmmod nvme_tcp 00:16:18.975 rmmod nvme_fabrics 00:16:18.975 rmmod nvme_keyring 00:16:18.975 15:39:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:18.975 15:39:58 -- nvmf/common.sh@123 -- # set -e 00:16:18.975 15:39:58 -- nvmf/common.sh@124 -- # return 0 00:16:18.975 15:39:58 -- nvmf/common.sh@477 -- # '[' -n 2123139 ']' 00:16:18.975 15:39:58 -- nvmf/common.sh@478 -- # killprocess 2123139 00:16:18.975 15:39:58 -- common/autotest_common.sh@926 -- # '[' -z 2123139 ']' 00:16:18.975 15:39:58 -- common/autotest_common.sh@930 -- # kill -0 2123139 00:16:18.975 15:39:58 -- common/autotest_common.sh@931 -- # uname 00:16:18.975 15:39:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:18.975 15:39:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2123139 00:16:18.975 15:39:58 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:18.975 15:39:58 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:18.975 15:39:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2123139' 00:16:18.975 killing process with pid 2123139 00:16:18.975 15:39:58 -- common/autotest_common.sh@945 -- # kill 2123139 00:16:18.975 15:39:58 -- common/autotest_common.sh@950 -- # wait 2123139 00:16:19.233 15:39:58 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:19.233 15:39:58 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:19.233 15:39:58 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:19.233 15:39:58 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:19.233 15:39:58 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:19.233 15:39:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.233 15:39:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.233 15:39:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.762 15:40:00 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:21.762 00:16:21.762 real 0m6.898s 00:16:21.762 user 0m12.466s 00:16:21.762 sys 0m2.070s 00:16:21.762 15:40:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:21.762 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.762 ************************************ 00:16:21.762 END TEST nvmf_bdevio 00:16:21.762 ************************************ 00:16:21.762 15:40:00 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:21.762 15:40:00 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:21.762 15:40:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:21.762 15:40:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:21.762 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:16:21.762 ************************************ 00:16:21.762 START TEST nvmf_bdevio_no_huge 00:16:21.762 ************************************ 00:16:21.762 15:40:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:21.762 * Looking for test storage... 
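The run_test line above shows how the hugepage-free variant is driven: the harness simply re-invokes the same target script with one extra flag. A rough sketch of reproducing just these two sub-tests by hand, with paths as in this workspace (environment setup from autorun-spdk.conf is assumed to be in place):

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
./test/nvmf/target/bdevio.sh --transport=tcp                  # the nvmf_bdevio run that just finished
./test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages   # the nvmf_bdevio_no_huge run starting here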
00:16:21.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:21.762 15:40:00 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:21.762 15:40:00 -- nvmf/common.sh@7 -- # uname -s 00:16:21.762 15:40:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:21.762 15:40:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:21.762 15:40:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:21.762 15:40:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:21.762 15:40:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:21.762 15:40:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:21.762 15:40:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:21.762 15:40:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:21.762 15:40:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:21.762 15:40:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:21.762 15:40:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.762 15:40:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.762 15:40:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:21.762 15:40:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:21.762 15:40:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:21.762 15:40:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:21.762 15:40:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:21.762 15:40:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:21.762 15:40:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:21.762 15:40:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.762 15:40:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.762 15:40:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.762 15:40:00 -- paths/export.sh@5 -- # export PATH 00:16:21.762 15:40:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:21.762 15:40:00 -- nvmf/common.sh@46 -- # : 0 00:16:21.762 15:40:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:21.762 15:40:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:21.762 15:40:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:21.762 15:40:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:21.762 15:40:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:21.762 15:40:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:21.762 15:40:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:21.762 15:40:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:21.762 15:40:00 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:21.762 15:40:00 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:21.762 15:40:00 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:21.762 15:40:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:21.762 15:40:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:21.762 15:40:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:21.762 15:40:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:21.762 15:40:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:21.762 15:40:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.762 15:40:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.762 15:40:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:21.762 15:40:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:21.762 15:40:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:21.762 15:40:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:21.762 15:40:00 -- common/autotest_common.sh@10 -- # set +x 00:16:23.660 15:40:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:23.660 15:40:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:23.660 15:40:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:23.660 15:40:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:23.660 15:40:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:23.660 15:40:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:23.660 15:40:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:23.660 15:40:02 -- nvmf/common.sh@294 -- # net_devs=() 00:16:23.660 15:40:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:23.660 15:40:02 -- nvmf/common.sh@295 
-- # e810=() 00:16:23.660 15:40:02 -- nvmf/common.sh@295 -- # local -ga e810 00:16:23.660 15:40:02 -- nvmf/common.sh@296 -- # x722=() 00:16:23.660 15:40:02 -- nvmf/common.sh@296 -- # local -ga x722 00:16:23.660 15:40:02 -- nvmf/common.sh@297 -- # mlx=() 00:16:23.660 15:40:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:23.660 15:40:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.660 15:40:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.661 15:40:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:23.661 15:40:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:23.661 15:40:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:23.661 15:40:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:23.661 15:40:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:23.661 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:23.661 15:40:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:23.661 15:40:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:23.661 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:23.661 15:40:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:23.661 15:40:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:23.661 15:40:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.661 15:40:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:23.661 15:40:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.661 15:40:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:23.661 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:16:23.661 15:40:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.661 15:40:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:23.661 15:40:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.661 15:40:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:23.661 15:40:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.661 15:40:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:23.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:23.661 15:40:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.661 15:40:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:23.661 15:40:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:23.661 15:40:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:23.661 15:40:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.661 15:40:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.661 15:40:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:23.661 15:40:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:23.661 15:40:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:23.661 15:40:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:23.661 15:40:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:23.661 15:40:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:23.661 15:40:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.661 15:40:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:23.661 15:40:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:23.661 15:40:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:23.661 15:40:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:23.661 15:40:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:23.661 15:40:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:23.661 15:40:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:23.661 15:40:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:23.661 15:40:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:23.661 15:40:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:23.661 15:40:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:23.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:23.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:16:23.661 00:16:23.661 --- 10.0.0.2 ping statistics --- 00:16:23.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.661 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:16:23.661 15:40:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:23.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
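The nvmf_tcp_init block traced above builds the two-port test bed: one E810 port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened, and both directions are ping-checked. A condensed sketch of the same steps follows, with interface and namespace names taken from this run (substitute your own NIC names):

TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk    # names as seen in this run
ip -4 addr flush "$TGT_IF"
ip -4 addr flush "$INI_IF"
ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"                     # target port lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_IF"                 # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # root ns -> target ns
ip netns exec "$NS" ping -c 1 10.0.0.1                # target ns -> root ns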
00:16:23.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:16:23.661 00:16:23.661 --- 10.0.0.1 ping statistics --- 00:16:23.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.661 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:16:23.661 15:40:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.661 15:40:02 -- nvmf/common.sh@410 -- # return 0 00:16:23.661 15:40:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:23.661 15:40:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.661 15:40:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:23.661 15:40:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.661 15:40:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:23.661 15:40:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:23.661 15:40:02 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:23.661 15:40:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:23.661 15:40:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:23.661 15:40:02 -- common/autotest_common.sh@10 -- # set +x 00:16:23.661 15:40:02 -- nvmf/common.sh@469 -- # nvmfpid=2125387 00:16:23.661 15:40:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:23.661 15:40:02 -- nvmf/common.sh@470 -- # waitforlisten 2125387 00:16:23.661 15:40:02 -- common/autotest_common.sh@819 -- # '[' -z 2125387 ']' 00:16:23.661 15:40:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.661 15:40:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:23.661 15:40:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.661 15:40:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:23.661 15:40:02 -- common/autotest_common.sh@10 -- # set +x 00:16:23.661 [2024-07-10 15:40:02.795379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:23.661 [2024-07-10 15:40:02.795489] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:23.661 [2024-07-10 15:40:02.866536] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.661 [2024-07-10 15:40:02.966895] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:23.661 [2024-07-10 15:40:02.967047] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.661 [2024-07-10 15:40:02.967072] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.661 [2024-07-10 15:40:02.967092] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
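Note the target invocation above: nvmf_tgt is started inside the namespace with --no-huge -s 1024, so DPDK allocates a 1024 MiB heap from ordinary pages instead of hugepages, with core mask 0x78 and full tracing (-e 0xFFFF). A sketch of launching it the same way and waiting for the RPC socket; the polling loop is an assumption standing in for common.sh's waitforlisten:

NS=cvl_0_0_ns_spdk
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
# crude stand-in for waitforlisten: poll until the app answers on the default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done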
00:16:23.661 [2024-07-10 15:40:02.967216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:23.661 [2024-07-10 15:40:02.967302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:23.661 [2024-07-10 15:40:02.967304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.661 [2024-07-10 15:40:02.967244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:24.593 15:40:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:24.593 15:40:03 -- common/autotest_common.sh@852 -- # return 0 00:16:24.593 15:40:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:24.593 15:40:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:24.593 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.593 15:40:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:24.593 15:40:03 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:24.593 15:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.593 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.593 [2024-07-10 15:40:03.794517] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:24.593 15:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.593 15:40:03 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:24.593 15:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.593 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.593 Malloc0 00:16:24.593 15:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.593 15:40:03 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:24.593 15:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.593 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.593 15:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.593 15:40:03 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:24.593 15:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.593 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.593 15:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.593 15:40:03 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:24.593 15:40:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.593 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:16:24.593 [2024-07-10 15:40:03.833053] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:24.593 15:40:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.593 15:40:03 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:24.593 15:40:03 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:24.593 15:40:03 -- nvmf/common.sh@520 -- # config=() 00:16:24.593 15:40:03 -- nvmf/common.sh@520 -- # local subsystem config 00:16:24.593 15:40:03 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:24.593 15:40:03 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:24.593 { 00:16:24.593 "params": { 00:16:24.593 "name": "Nvme$subsystem", 00:16:24.593 "trtype": "$TEST_TRANSPORT", 00:16:24.593 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:24.593 "adrfam": "ipv4", 00:16:24.593 
"trsvcid": "$NVMF_PORT", 00:16:24.593 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:24.593 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:24.593 "hdgst": ${hdgst:-false}, 00:16:24.593 "ddgst": ${ddgst:-false} 00:16:24.593 }, 00:16:24.593 "method": "bdev_nvme_attach_controller" 00:16:24.593 } 00:16:24.593 EOF 00:16:24.593 )") 00:16:24.593 15:40:03 -- nvmf/common.sh@542 -- # cat 00:16:24.593 15:40:03 -- nvmf/common.sh@544 -- # jq . 00:16:24.593 15:40:03 -- nvmf/common.sh@545 -- # IFS=, 00:16:24.593 15:40:03 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:24.593 "params": { 00:16:24.593 "name": "Nvme1", 00:16:24.593 "trtype": "tcp", 00:16:24.593 "traddr": "10.0.0.2", 00:16:24.593 "adrfam": "ipv4", 00:16:24.593 "trsvcid": "4420", 00:16:24.593 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:24.593 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:24.593 "hdgst": false, 00:16:24.593 "ddgst": false 00:16:24.593 }, 00:16:24.593 "method": "bdev_nvme_attach_controller" 00:16:24.593 }' 00:16:24.593 [2024-07-10 15:40:03.875704] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:24.594 [2024-07-10 15:40:03.875784] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2125546 ] 00:16:24.594 [2024-07-10 15:40:03.938471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:24.851 [2024-07-10 15:40:04.053077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:24.851 [2024-07-10 15:40:04.053131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:24.851 [2024-07-10 15:40:04.053134] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.108 [2024-07-10 15:40:04.327236] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:16:25.108 [2024-07-10 15:40:04.327285] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:25.108 I/O targets: 00:16:25.108 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:25.108 00:16:25.108 00:16:25.108 CUnit - A unit testing framework for C - Version 2.1-3 00:16:25.108 http://cunit.sourceforge.net/ 00:16:25.108 00:16:25.108 00:16:25.108 Suite: bdevio tests on: Nvme1n1 00:16:25.108 Test: blockdev write read block ...passed 00:16:25.108 Test: blockdev write zeroes read block ...passed 00:16:25.108 Test: blockdev write zeroes read no split ...passed 00:16:25.108 Test: blockdev write zeroes read split ...passed 00:16:25.365 Test: blockdev write zeroes read split partial ...passed 00:16:25.365 Test: blockdev reset ...[2024-07-10 15:40:04.539905] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:25.365 [2024-07-10 15:40:04.540013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17cbb00 (9): Bad file descriptor 00:16:25.365 [2024-07-10 15:40:04.717766] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:25.366 passed 00:16:25.366 Test: blockdev write read 8 blocks ...passed 00:16:25.366 Test: blockdev write read size > 128k ...passed 00:16:25.366 Test: blockdev write read invalid size ...passed 00:16:25.624 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:25.624 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:25.624 Test: blockdev write read max offset ...passed 00:16:25.624 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:25.624 Test: blockdev writev readv 8 blocks ...passed 00:16:25.624 Test: blockdev writev readv 30 x 1block ...passed 00:16:25.624 Test: blockdev writev readv block ...passed 00:16:25.624 Test: blockdev writev readv size > 128k ...passed 00:16:25.624 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:25.624 Test: blockdev comparev and writev ...[2024-07-10 15:40:04.891703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.891738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.891762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.891778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.892179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.892203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.892224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.892240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.892616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.892641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.892662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.892677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.893048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.893072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.893093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:25.624 [2024-07-10 15:40:04.893108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:25.624 passed 00:16:25.624 Test: blockdev nvme passthru rw ...passed 00:16:25.624 Test: blockdev nvme passthru vendor specific ...[2024-07-10 15:40:04.974792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.624 [2024-07-10 15:40:04.974819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.975009] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.624 [2024-07-10 15:40:04.975041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.975231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.624 [2024-07-10 15:40:04.975254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:25.624 [2024-07-10 15:40:04.975449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:25.625 [2024-07-10 15:40:04.975482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:25.625 passed 00:16:25.625 Test: blockdev nvme admin passthru ...passed 00:16:25.883 Test: blockdev copy ...passed 00:16:25.883 00:16:25.883 Run Summary: Type Total Ran Passed Failed Inactive 00:16:25.883 suites 1 1 n/a 0 0 00:16:25.883 tests 23 23 23 0 0 00:16:25.883 asserts 152 152 152 0 n/a 00:16:25.883 00:16:25.883 Elapsed time = 1.402 seconds 00:16:26.141 15:40:05 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.141 15:40:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.141 15:40:05 -- common/autotest_common.sh@10 -- # set +x 00:16:26.141 15:40:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.141 15:40:05 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:26.141 15:40:05 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:26.141 15:40:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:26.141 15:40:05 -- nvmf/common.sh@116 -- # sync 00:16:26.141 15:40:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:26.141 15:40:05 -- nvmf/common.sh@119 -- # set +e 00:16:26.141 15:40:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:26.141 15:40:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:26.141 rmmod nvme_tcp 00:16:26.141 rmmod nvme_fabrics 00:16:26.141 rmmod nvme_keyring 00:16:26.141 15:40:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:26.141 15:40:05 -- nvmf/common.sh@123 -- # set -e 00:16:26.141 15:40:05 -- nvmf/common.sh@124 -- # return 0 00:16:26.141 15:40:05 -- nvmf/common.sh@477 -- # '[' -n 2125387 ']' 00:16:26.141 15:40:05 -- nvmf/common.sh@478 -- # killprocess 2125387 00:16:26.141 15:40:05 -- common/autotest_common.sh@926 -- # '[' -z 2125387 ']' 00:16:26.142 15:40:05 -- common/autotest_common.sh@930 -- # kill -0 2125387 00:16:26.142 15:40:05 -- common/autotest_common.sh@931 -- # uname 00:16:26.142 15:40:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:26.142 15:40:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2125387 00:16:26.142 15:40:05 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:26.142 15:40:05 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:26.142 15:40:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2125387' 00:16:26.142 killing process with pid 2125387 00:16:26.142 15:40:05 -- common/autotest_common.sh@945 -- # kill 2125387 00:16:26.142 15:40:05 -- common/autotest_common.sh@950 -- # wait 2125387 00:16:26.708 15:40:05 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:26.708 15:40:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:26.708 15:40:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:26.708 15:40:05 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.708 15:40:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:26.708 15:40:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.708 15:40:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.708 15:40:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.611 15:40:07 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:28.611 00:16:28.611 real 0m7.397s 00:16:28.611 user 0m14.935s 00:16:28.611 sys 0m2.541s 00:16:28.611 15:40:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:28.611 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:28.611 ************************************ 00:16:28.611 END TEST nvmf_bdevio_no_huge 00:16:28.611 ************************************ 00:16:28.869 15:40:07 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:28.869 15:40:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:28.869 15:40:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:28.869 15:40:07 -- common/autotest_common.sh@10 -- # set +x 00:16:28.869 ************************************ 00:16:28.869 START TEST nvmf_tls 00:16:28.869 ************************************ 00:16:28.869 15:40:07 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:28.869 * Looking for test storage... 
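For completeness, the nvmftestfini sequence traced just above (before nvmf_tls starts) amounts to: kill the target by pid, unload the NVMe/TCP initiator modules, tear down the namespace, and flush the initiator address. A condensed sketch with names from this run; the ip netns delete line is an assumption about what _remove_spdk_ns does:

kill -9 "$nvmfpid" 2>/dev/null || true                # killprocess in the trace
modprobe -v -r nvme-tcp                               # also pulls out nvme_fabrics / nvme_keyring
modprobe -v -r nvme-fabrics
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
ip -4 addr flush cvl_0_1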
00:16:28.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:28.869 15:40:08 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:28.869 15:40:08 -- nvmf/common.sh@7 -- # uname -s 00:16:28.870 15:40:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:28.870 15:40:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:28.870 15:40:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:28.870 15:40:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:28.870 15:40:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:28.870 15:40:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:28.870 15:40:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:28.870 15:40:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:28.870 15:40:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:28.870 15:40:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:28.870 15:40:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.870 15:40:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:28.870 15:40:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:28.870 15:40:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:28.870 15:40:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:28.870 15:40:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:28.870 15:40:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:28.870 15:40:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:28.870 15:40:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:28.870 15:40:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.870 15:40:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.870 15:40:08 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.870 15:40:08 -- paths/export.sh@5 -- # export PATH 00:16:28.870 15:40:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:28.870 15:40:08 -- nvmf/common.sh@46 -- # : 0 00:16:28.870 15:40:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:28.870 15:40:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:28.870 15:40:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:28.870 15:40:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:28.870 15:40:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:28.870 15:40:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:28.870 15:40:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:28.870 15:40:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:28.870 15:40:08 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:28.870 15:40:08 -- target/tls.sh@71 -- # nvmftestinit 00:16:28.870 15:40:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:28.870 15:40:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:28.870 15:40:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:28.870 15:40:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:28.870 15:40:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:28.870 15:40:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:28.870 15:40:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:28.870 15:40:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:28.870 15:40:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:28.870 15:40:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:28.870 15:40:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:28.870 15:40:08 -- common/autotest_common.sh@10 -- # set +x 00:16:30.772 15:40:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:30.772 15:40:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:30.772 15:40:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:30.772 15:40:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:30.772 15:40:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:30.772 15:40:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:30.772 15:40:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:30.772 15:40:10 -- nvmf/common.sh@294 -- # net_devs=() 00:16:30.772 15:40:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:30.772 15:40:10 -- nvmf/common.sh@295 -- # e810=() 00:16:30.772 
15:40:10 -- nvmf/common.sh@295 -- # local -ga e810 00:16:30.772 15:40:10 -- nvmf/common.sh@296 -- # x722=() 00:16:30.772 15:40:10 -- nvmf/common.sh@296 -- # local -ga x722 00:16:30.772 15:40:10 -- nvmf/common.sh@297 -- # mlx=() 00:16:30.772 15:40:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:30.772 15:40:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:30.772 15:40:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:30.772 15:40:10 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:30.772 15:40:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:30.772 15:40:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:30.772 15:40:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:30.772 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:30.772 15:40:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:30.772 15:40:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:30.772 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:30.772 15:40:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:30.772 15:40:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:30.772 15:40:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.772 15:40:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:30.772 15:40:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.772 15:40:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:30.772 Found net devices under 
0000:0a:00.0: cvl_0_0 00:16:30.772 15:40:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.772 15:40:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:30.772 15:40:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:30.772 15:40:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:30.772 15:40:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:30.772 15:40:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:30.772 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:30.772 15:40:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:30.772 15:40:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:30.772 15:40:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:30.772 15:40:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:30.772 15:40:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:30.772 15:40:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.772 15:40:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.772 15:40:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:30.772 15:40:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:30.772 15:40:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:30.773 15:40:10 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:30.773 15:40:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:30.773 15:40:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:30.773 15:40:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.773 15:40:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:30.773 15:40:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:30.773 15:40:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:30.773 15:40:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:30.773 15:40:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:30.773 15:40:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:30.773 15:40:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:30.773 15:40:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.031 15:40:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.031 15:40:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.031 15:40:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:31.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:16:31.031 00:16:31.031 --- 10.0.0.2 ping statistics --- 00:16:31.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.031 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:16:31.031 15:40:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:16:31.031 00:16:31.031 --- 10.0.0.1 ping statistics --- 00:16:31.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.031 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:16:31.031 15:40:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.031 15:40:10 -- nvmf/common.sh@410 -- # return 0 00:16:31.031 15:40:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.031 15:40:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.031 15:40:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.031 15:40:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.031 15:40:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.031 15:40:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.031 15:40:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.031 15:40:10 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:31.031 15:40:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.031 15:40:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:31.031 15:40:10 -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 15:40:10 -- nvmf/common.sh@469 -- # nvmfpid=2127632 00:16:31.031 15:40:10 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:31.031 15:40:10 -- nvmf/common.sh@470 -- # waitforlisten 2127632 00:16:31.031 15:40:10 -- common/autotest_common.sh@819 -- # '[' -z 2127632 ']' 00:16:31.031 15:40:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.031 15:40:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.031 15:40:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.031 15:40:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.031 15:40:10 -- common/autotest_common.sh@10 -- # set +x 00:16:31.031 [2024-07-10 15:40:10.254312] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:31.031 [2024-07-10 15:40:10.254399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.031 EAL: No free 2048 kB hugepages reported on node 1 00:16:31.031 [2024-07-10 15:40:10.335082] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.289 [2024-07-10 15:40:10.476249] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.289 [2024-07-10 15:40:10.476447] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.289 [2024-07-10 15:40:10.476494] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.289 [2024-07-10 15:40:10.476513] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:31.289 [2024-07-10 15:40:10.476557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.223 15:40:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.223 15:40:11 -- common/autotest_common.sh@852 -- # return 0 00:16:32.223 15:40:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.223 15:40:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:32.223 15:40:11 -- common/autotest_common.sh@10 -- # set +x 00:16:32.223 15:40:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.223 15:40:11 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:32.223 15:40:11 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:32.223 true 00:16:32.223 15:40:11 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.223 15:40:11 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:32.481 15:40:11 -- target/tls.sh@82 -- # version=0 00:16:32.481 15:40:11 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:32.481 15:40:11 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:32.739 15:40:11 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.739 15:40:11 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:32.997 15:40:12 -- target/tls.sh@90 -- # version=13 00:16:32.997 15:40:12 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:32.997 15:40:12 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:33.255 15:40:12 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.255 15:40:12 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:33.513 15:40:12 -- target/tls.sh@98 -- # version=7 00:16:33.513 15:40:12 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:33.513 15:40:12 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.513 15:40:12 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:33.771 15:40:13 -- target/tls.sh@105 -- # ktls=false 00:16:33.771 15:40:13 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:33.771 15:40:13 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:34.029 15:40:13 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.029 15:40:13 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:34.286 15:40:13 -- target/tls.sh@113 -- # ktls=true 00:16:34.286 15:40:13 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:34.286 15:40:13 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:34.543 15:40:13 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.543 15:40:13 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:34.801 15:40:14 -- target/tls.sh@121 -- # ktls=false 00:16:34.801 15:40:14 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:34.801 15:40:14 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:16:34.801 15:40:14 -- target/tls.sh@49 -- # local key hash crc 00:16:34.801 15:40:14 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:34.801 15:40:14 -- target/tls.sh@51 -- # hash=01 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # gzip -1 -c 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # tail -c8 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # head -c 4 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # crc='p$H�' 00:16:34.801 15:40:14 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:34.801 15:40:14 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:34.801 15:40:14 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:34.801 15:40:14 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:34.801 15:40:14 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:34.801 15:40:14 -- target/tls.sh@49 -- # local key hash crc 00:16:34.801 15:40:14 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:34.801 15:40:14 -- target/tls.sh@51 -- # hash=01 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # gzip -1 -c 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # tail -c8 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # head -c 4 00:16:34.801 15:40:14 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:34.801 15:40:14 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:34.801 15:40:14 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:34.801 15:40:14 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:34.801 15:40:14 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:34.801 15:40:14 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:34.801 15:40:14 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:34.801 15:40:14 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:34.802 15:40:14 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:34.802 15:40:14 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:34.802 15:40:14 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:34.802 15:40:14 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:35.368 15:40:14 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:16:35.626 15:40:14 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:35.626 15:40:14 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:35.626 15:40:14 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:35.884 [2024-07-10 15:40:15.055870] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
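The two NVMeTLSkey strings derived above come from format_interchange_psk, which builds the NVMe TLS PSK interchange format out of the hex key plus a CRC32 taken from a gzip stream trailer. A condensed re-derivation, as a sketch rather than the test's exact code, using the same sample key as the trace:
  key=00112233445566778899aabbccddeeff
  # the last 8 bytes of a gzip stream are CRC32 + input length, so this isolates the CRC32
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
  # base64-encode key+CRC; the "01" field is the hash label traced above
  # (a "02" label is used later in the run for the 48-hex-digit key)
  echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
  # prints NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: as in the trace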
00:16:35.884 15:40:15 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:36.141 15:40:15 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:36.399 [2024-07-10 15:40:15.525178] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:36.399 [2024-07-10 15:40:15.525410] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.399 15:40:15 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:36.657 malloc0 00:16:36.657 15:40:15 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:36.914 15:40:16 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:37.172 15:40:16 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:37.172 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.133 Initializing NVMe Controllers 00:16:47.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:47.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:47.133 Initialization complete. Launching workers. 
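For reference, the target-side RPC sequence that was just traced, collected in one place. This is a sketch: the rpc.py and key paths are simply the workspace paths from this run, and -k is the flag that puts the listener into the (experimental) TLS mode reported by the notices above.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
  $RPC sock_impl_set_options -i ssl --tls-version 13          # pin the ssl sock impl to TLS 1.3
  $RPC framework_start_init
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"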
00:16:47.133 ======================================================== 00:16:47.133 Latency(us) 00:16:47.133 Device Information : IOPS MiB/s Average min max 00:16:47.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7661.08 29.93 8356.67 1230.67 9049.31 00:16:47.133 ======================================================== 00:16:47.133 Total : 7661.08 29.93 8356.67 1230.67 9049.31 00:16:47.133 00:16:47.133 15:40:26 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:47.133 15:40:26 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:47.133 15:40:26 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:47.133 15:40:26 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:47.133 15:40:26 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:16:47.133 15:40:26 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:47.133 15:40:26 -- target/tls.sh@28 -- # bdevperf_pid=2129725 00:16:47.133 15:40:26 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:47.133 15:40:26 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:47.133 15:40:26 -- target/tls.sh@31 -- # waitforlisten 2129725 /var/tmp/bdevperf.sock 00:16:47.133 15:40:26 -- common/autotest_common.sh@819 -- # '[' -z 2129725 ']' 00:16:47.133 15:40:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:47.133 15:40:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:47.133 15:40:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:47.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:47.133 15:40:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:47.133 15:40:26 -- common/autotest_common.sh@10 -- # set +x 00:16:47.133 [2024-07-10 15:40:26.479987] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
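The perf numbers above close the spdk_nvme_perf pass; the run_bdevperf pass that starts here repeats the I/O against the same TLS listener through bdevperf. A host-side sketch of that flow, with paths and NQNs as in this run (the backgrounding with & is an assumption for the sketch; the test script manages the process itself and waits for the RPC socket via waitforlisten):
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt
  # -z keeps bdevperf idle until it is driven over its own RPC socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
  # attach a controller with the same PSK the target registered for host1
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
  # kick off the verify workload against the attached bdev
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -t 20 -s /var/tmp/bdevperf.sock perform_tests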
00:16:47.133 [2024-07-10 15:40:26.480066] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2129725 ] 00:16:47.133 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.391 [2024-07-10 15:40:26.538167] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:47.391 [2024-07-10 15:40:26.643269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:48.325 15:40:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:48.325 15:40:27 -- common/autotest_common.sh@852 -- # return 0 00:16:48.325 15:40:27 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:16:48.325 [2024-07-10 15:40:27.678073] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:48.583 TLSTESTn1 00:16:48.583 15:40:27 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:48.583 Running I/O for 10 seconds... 00:16:58.644 00:16:58.644 Latency(us) 00:16:58.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.644 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:58.644 Verification LBA range: start 0x0 length 0x2000 00:16:58.644 TLSTESTn1 : 10.03 2511.34 9.81 0.00 0.00 50902.71 10437.21 64468.01 00:16:58.645 =================================================================================================================== 00:16:58.645 Total : 2511.34 9.81 0.00 0.00 50902.71 10437.21 64468.01 00:16:58.645 0 00:16:58.645 15:40:37 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:58.645 15:40:37 -- target/tls.sh@45 -- # killprocess 2129725 00:16:58.645 15:40:37 -- common/autotest_common.sh@926 -- # '[' -z 2129725 ']' 00:16:58.645 15:40:37 -- common/autotest_common.sh@930 -- # kill -0 2129725 00:16:58.645 15:40:37 -- common/autotest_common.sh@931 -- # uname 00:16:58.645 15:40:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:58.645 15:40:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2129725 00:16:58.645 15:40:37 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:16:58.645 15:40:37 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:16:58.645 15:40:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2129725' 00:16:58.645 killing process with pid 2129725 00:16:58.645 15:40:37 -- common/autotest_common.sh@945 -- # kill 2129725 00:16:58.645 Received shutdown signal, test time was about 10.000000 seconds 00:16:58.645 00:16:58.645 Latency(us) 00:16:58.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:58.645 =================================================================================================================== 00:16:58.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:58.645 15:40:37 -- common/autotest_common.sh@950 -- # wait 2129725 00:16:58.902 15:40:38 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:58.902 15:40:38 -- common/autotest_common.sh@640 -- # local es=0 00:16:58.902 15:40:38 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:58.902 15:40:38 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:16:58.902 15:40:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:58.902 15:40:38 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:16:58.902 15:40:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:58.902 15:40:38 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:16:58.902 15:40:38 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:58.902 15:40:38 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:58.902 15:40:38 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:58.902 15:40:38 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:16:58.902 15:40:38 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:58.902 15:40:38 -- target/tls.sh@28 -- # bdevperf_pid=2131097 00:16:58.902 15:40:38 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:58.902 15:40:38 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:58.902 15:40:38 -- target/tls.sh@31 -- # waitforlisten 2131097 /var/tmp/bdevperf.sock 00:16:58.902 15:40:38 -- common/autotest_common.sh@819 -- # '[' -z 2131097 ']' 00:16:58.902 15:40:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:58.902 15:40:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:58.902 15:40:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:58.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:58.902 15:40:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:58.902 15:40:38 -- common/autotest_common.sh@10 -- # set +x 00:16:59.160 [2024-07-10 15:40:38.282540] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
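This block is the first negative case: the target only registered key1.txt for host1, so attaching with key2.txt must fail. The NOT wrapper around run_bdevperf reduces to an inverted exit-status check, roughly as below; this is a simplified sketch of the autotest_common.sh logic, not the verbatim code, and WRONG_KEY just names the key2.txt path.
  WRONG_KEY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt
  if run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 "$WRONG_KEY"; then
      echo "unexpected success with a mismatched PSK" >&2
      exit 1
  fi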
00:16:59.160 [2024-07-10 15:40:38.282618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131097 ] 00:16:59.160 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.160 [2024-07-10 15:40:38.339066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.160 [2024-07-10 15:40:38.441209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.095 15:40:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.095 15:40:39 -- common/autotest_common.sh@852 -- # return 0 00:17:00.095 15:40:39 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:17:00.095 [2024-07-10 15:40:39.431231] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:00.095 [2024-07-10 15:40:39.436924] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:00.095 [2024-07-10 15:40:39.437369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3b870 (107): Transport endpoint is not connected 00:17:00.095 [2024-07-10 15:40:39.438354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e3b870 (9): Bad file descriptor 00:17:00.095 [2024-07-10 15:40:39.439352] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:00.095 [2024-07-10 15:40:39.439374] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:00.095 [2024-07-10 15:40:39.439395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:00.095 request: 00:17:00.095 { 00:17:00.095 "name": "TLSTEST", 00:17:00.095 "trtype": "tcp", 00:17:00.095 "traddr": "10.0.0.2", 00:17:00.095 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:00.095 "adrfam": "ipv4", 00:17:00.095 "trsvcid": "4420", 00:17:00.095 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:00.095 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:17:00.095 "method": "bdev_nvme_attach_controller", 00:17:00.095 "req_id": 1 00:17:00.095 } 00:17:00.095 Got JSON-RPC error response 00:17:00.095 response: 00:17:00.095 { 00:17:00.095 "code": -32602, 00:17:00.095 "message": "Invalid parameters" 00:17:00.095 } 00:17:00.095 15:40:39 -- target/tls.sh@36 -- # killprocess 2131097 00:17:00.095 15:40:39 -- common/autotest_common.sh@926 -- # '[' -z 2131097 ']' 00:17:00.095 15:40:39 -- common/autotest_common.sh@930 -- # kill -0 2131097 00:17:00.095 15:40:39 -- common/autotest_common.sh@931 -- # uname 00:17:00.095 15:40:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:00.095 15:40:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2131097 00:17:00.352 15:40:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:00.352 15:40:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:00.352 15:40:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2131097' 00:17:00.352 killing process with pid 2131097 00:17:00.352 15:40:39 -- common/autotest_common.sh@945 -- # kill 2131097 00:17:00.352 Received shutdown signal, test time was about 10.000000 seconds 00:17:00.352 00:17:00.352 Latency(us) 00:17:00.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.352 =================================================================================================================== 00:17:00.352 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:00.352 15:40:39 -- common/autotest_common.sh@950 -- # wait 2131097 00:17:00.611 15:40:39 -- target/tls.sh@37 -- # return 1 00:17:00.611 15:40:39 -- common/autotest_common.sh@643 -- # es=1 00:17:00.611 15:40:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:00.611 15:40:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:00.611 15:40:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:00.611 15:40:39 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:00.611 15:40:39 -- common/autotest_common.sh@640 -- # local es=0 00:17:00.611 15:40:39 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:00.611 15:40:39 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:00.611 15:40:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:00.611 15:40:39 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:00.611 15:40:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:00.611 15:40:39 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:00.611 15:40:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.611 15:40:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:00.611 15:40:39 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:17:00.611 15:40:39 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:00.611 15:40:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.611 15:40:39 -- target/tls.sh@28 -- # bdevperf_pid=2131251 00:17:00.611 15:40:39 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.611 15:40:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.611 15:40:39 -- target/tls.sh@31 -- # waitforlisten 2131251 /var/tmp/bdevperf.sock 00:17:00.611 15:40:39 -- common/autotest_common.sh@819 -- # '[' -z 2131251 ']' 00:17:00.611 15:40:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.611 15:40:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:00.611 15:40:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.611 15:40:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:00.611 15:40:39 -- common/autotest_common.sh@10 -- # set +x 00:17:00.611 [2024-07-10 15:40:39.790000] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:00.611 [2024-07-10 15:40:39.790091] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131251 ] 00:17:00.611 EAL: No free 2048 kB hugepages reported on node 1 00:17:00.611 [2024-07-10 15:40:39.846448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.611 [2024-07-10 15:40:39.948423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.544 15:40:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:01.544 15:40:40 -- common/autotest_common.sh@852 -- # return 0 00:17:01.544 15:40:40 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:01.802 [2024-07-10 15:40:41.034002] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:01.802 [2024-07-10 15:40:41.039512] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:01.802 [2024-07-10 15:40:41.039545] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:01.802 [2024-07-10 15:40:41.039648] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:01.802 [2024-07-10 15:40:41.040067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x646870 (107): Transport endpoint is not connected 00:17:01.802 [2024-07-10 15:40:41.041054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x646870 (9): Bad file descriptor 00:17:01.802 [2024-07-10 15:40:41.042052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:01.802 [2024-07-10 15:40:41.042074] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:01.802 [2024-07-10 15:40:41.042094] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:01.802 request: 00:17:01.802 { 00:17:01.802 "name": "TLSTEST", 00:17:01.802 "trtype": "tcp", 00:17:01.802 "traddr": "10.0.0.2", 00:17:01.802 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:01.802 "adrfam": "ipv4", 00:17:01.802 "trsvcid": "4420", 00:17:01.802 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:01.802 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:01.802 "method": "bdev_nvme_attach_controller", 00:17:01.802 "req_id": 1 00:17:01.802 } 00:17:01.802 Got JSON-RPC error response 00:17:01.802 response: 00:17:01.802 { 00:17:01.802 "code": -32602, 00:17:01.802 "message": "Invalid parameters" 00:17:01.802 } 00:17:01.802 15:40:41 -- target/tls.sh@36 -- # killprocess 2131251 00:17:01.802 15:40:41 -- common/autotest_common.sh@926 -- # '[' -z 2131251 ']' 00:17:01.802 15:40:41 -- common/autotest_common.sh@930 -- # kill -0 2131251 00:17:01.802 15:40:41 -- common/autotest_common.sh@931 -- # uname 00:17:01.802 15:40:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:01.802 15:40:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2131251 00:17:01.802 15:40:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:01.802 15:40:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:01.802 15:40:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2131251' 00:17:01.802 killing process with pid 2131251 00:17:01.802 15:40:41 -- common/autotest_common.sh@945 -- # kill 2131251 00:17:01.802 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.802 00:17:01.802 Latency(us) 00:17:01.802 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.802 =================================================================================================================== 00:17:01.802 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:01.802 15:40:41 -- common/autotest_common.sh@950 -- # wait 2131251 00:17:02.060 15:40:41 -- target/tls.sh@37 -- # return 1 00:17:02.060 15:40:41 -- common/autotest_common.sh@643 -- # es=1 00:17:02.060 15:40:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:02.060 15:40:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:02.060 15:40:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:02.060 15:40:41 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:02.060 15:40:41 -- common/autotest_common.sh@640 -- # local es=0 00:17:02.060 15:40:41 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:02.060 15:40:41 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:02.060 15:40:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:02.060 15:40:41 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:02.060 15:40:41 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:02.060 15:40:41 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:02.060 15:40:41 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:02.060 15:40:41 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:02.060 15:40:41 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:02.060 15:40:41 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:17:02.060 15:40:41 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.060 15:40:41 -- target/tls.sh@28 -- # bdevperf_pid=2131519 00:17:02.060 15:40:41 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:02.060 15:40:41 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.060 15:40:41 -- target/tls.sh@31 -- # waitforlisten 2131519 /var/tmp/bdevperf.sock 00:17:02.060 15:40:41 -- common/autotest_common.sh@819 -- # '[' -z 2131519 ']' 00:17:02.060 15:40:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.060 15:40:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:02.060 15:40:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.060 15:40:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:02.060 15:40:41 -- common/autotest_common.sh@10 -- # set +x 00:17:02.060 [2024-07-10 15:40:41.382818] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
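Both this case and the host2 case above fail inside the target's PSK lookup rather than in TLS itself: registered PSKs are keyed by an identity string combining hostnqn and subnqn, and the error messages around this point print the exact identity being searched for. A tiny illustration; the "NVMe0R01" prefix and field order are read off those error messages, not from separate documentation:
  hostnqn=nqn.2016-06.io.spdk:host1
  subnqn=nqn.2016-06.io.spdk:cnode2
  printf 'NVMe0R01 %s %s\n' "$hostnqn" "$subnqn"
  # no nvmf_subsystem_add_host entry exists for this (hostnqn, subnqn) pair, so
  # posix_sock_psk_find_session_server_cb reports "Unable to find PSK for identity"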
00:17:02.061 [2024-07-10 15:40:41.382899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131519 ] 00:17:02.061 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.318 [2024-07-10 15:40:41.442346] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.318 [2024-07-10 15:40:41.545543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.250 15:40:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:03.250 15:40:42 -- common/autotest_common.sh@852 -- # return 0 00:17:03.250 15:40:42 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:17:03.250 [2024-07-10 15:40:42.576195] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.251 [2024-07-10 15:40:42.582877] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:03.251 [2024-07-10 15:40:42.582906] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:03.251 [2024-07-10 15:40:42.582942] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:03.251 [2024-07-10 15:40:42.583347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c870 (107): Transport endpoint is not connected 00:17:03.251 [2024-07-10 15:40:42.584326] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9c870 (9): Bad file descriptor 00:17:03.251 [2024-07-10 15:40:42.585326] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:03.251 [2024-07-10 15:40:42.585348] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:03.251 [2024-07-10 15:40:42.585368] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:03.251 request: 00:17:03.251 { 00:17:03.251 "name": "TLSTEST", 00:17:03.251 "trtype": "tcp", 00:17:03.251 "traddr": "10.0.0.2", 00:17:03.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:03.251 "adrfam": "ipv4", 00:17:03.251 "trsvcid": "4420", 00:17:03.251 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:03.251 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:17:03.251 "method": "bdev_nvme_attach_controller", 00:17:03.251 "req_id": 1 00:17:03.251 } 00:17:03.251 Got JSON-RPC error response 00:17:03.251 response: 00:17:03.251 { 00:17:03.251 "code": -32602, 00:17:03.251 "message": "Invalid parameters" 00:17:03.251 } 00:17:03.251 15:40:42 -- target/tls.sh@36 -- # killprocess 2131519 00:17:03.251 15:40:42 -- common/autotest_common.sh@926 -- # '[' -z 2131519 ']' 00:17:03.251 15:40:42 -- common/autotest_common.sh@930 -- # kill -0 2131519 00:17:03.251 15:40:42 -- common/autotest_common.sh@931 -- # uname 00:17:03.251 15:40:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.251 15:40:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2131519 00:17:03.508 15:40:42 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:03.508 15:40:42 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:03.508 15:40:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2131519' 00:17:03.508 killing process with pid 2131519 00:17:03.508 15:40:42 -- common/autotest_common.sh@945 -- # kill 2131519 00:17:03.508 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.508 00:17:03.508 Latency(us) 00:17:03.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.508 =================================================================================================================== 00:17:03.508 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:03.508 15:40:42 -- common/autotest_common.sh@950 -- # wait 2131519 00:17:03.765 15:40:42 -- target/tls.sh@37 -- # return 1 00:17:03.765 15:40:42 -- common/autotest_common.sh@643 -- # es=1 00:17:03.765 15:40:42 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:03.765 15:40:42 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:03.765 15:40:42 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:03.765 15:40:42 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:03.765 15:40:42 -- common/autotest_common.sh@640 -- # local es=0 00:17:03.765 15:40:42 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:03.765 15:40:42 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:03.765 15:40:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.765 15:40:42 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:03.765 15:40:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.765 15:40:42 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:03.765 15:40:42 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:03.765 15:40:42 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:03.765 15:40:42 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:03.765 15:40:42 -- target/tls.sh@23 -- # psk= 00:17:03.765 15:40:42 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.765 15:40:42 -- target/tls.sh@28 
-- # bdevperf_pid=2131676 00:17:03.765 15:40:42 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:03.765 15:40:42 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:03.765 15:40:42 -- target/tls.sh@31 -- # waitforlisten 2131676 /var/tmp/bdevperf.sock 00:17:03.765 15:40:42 -- common/autotest_common.sh@819 -- # '[' -z 2131676 ']' 00:17:03.765 15:40:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.765 15:40:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:03.765 15:40:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.765 15:40:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:03.765 15:40:42 -- common/autotest_common.sh@10 -- # set +x 00:17:03.765 [2024-07-10 15:40:42.932365] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:03.765 [2024-07-10 15:40:42.932450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131676 ] 00:17:03.765 EAL: No free 2048 kB hugepages reported on node 1 00:17:03.765 [2024-07-10 15:40:42.988598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.765 [2024-07-10 15:40:43.089136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.696 15:40:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.696 15:40:43 -- common/autotest_common.sh@852 -- # return 0 00:17:04.696 15:40:43 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:04.954 [2024-07-10 15:40:44.076784] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:04.954 [2024-07-10 15:40:44.078872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21a7330 (9): Bad file descriptor 00:17:04.954 [2024-07-10 15:40:44.079868] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:04.954 [2024-07-10 15:40:44.079892] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:04.954 [2024-07-10 15:40:44.079913] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
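Here the attach is issued with no PSK at all against the listener that was created with -k, so the connection is torn down before the controller can initialize. The command below is the same attach as the successful run minus --psk; a sketch, with the rpc.py path as in this workspace:
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1            # note: no --psk, expected to fail here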
00:17:04.954 request: 00:17:04.954 { 00:17:04.954 "name": "TLSTEST", 00:17:04.954 "trtype": "tcp", 00:17:04.954 "traddr": "10.0.0.2", 00:17:04.954 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.954 "adrfam": "ipv4", 00:17:04.954 "trsvcid": "4420", 00:17:04.954 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.954 "method": "bdev_nvme_attach_controller", 00:17:04.954 "req_id": 1 00:17:04.954 } 00:17:04.954 Got JSON-RPC error response 00:17:04.954 response: 00:17:04.954 { 00:17:04.954 "code": -32602, 00:17:04.954 "message": "Invalid parameters" 00:17:04.954 } 00:17:04.954 15:40:44 -- target/tls.sh@36 -- # killprocess 2131676 00:17:04.954 15:40:44 -- common/autotest_common.sh@926 -- # '[' -z 2131676 ']' 00:17:04.954 15:40:44 -- common/autotest_common.sh@930 -- # kill -0 2131676 00:17:04.954 15:40:44 -- common/autotest_common.sh@931 -- # uname 00:17:04.954 15:40:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.954 15:40:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2131676 00:17:04.954 15:40:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:04.954 15:40:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:04.954 15:40:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2131676' 00:17:04.954 killing process with pid 2131676 00:17:04.954 15:40:44 -- common/autotest_common.sh@945 -- # kill 2131676 00:17:04.954 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.954 00:17:04.954 Latency(us) 00:17:04.954 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.954 =================================================================================================================== 00:17:04.954 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.954 15:40:44 -- common/autotest_common.sh@950 -- # wait 2131676 00:17:05.212 15:40:44 -- target/tls.sh@37 -- # return 1 00:17:05.212 15:40:44 -- common/autotest_common.sh@643 -- # es=1 00:17:05.212 15:40:44 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:05.212 15:40:44 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:05.212 15:40:44 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:05.212 15:40:44 -- target/tls.sh@167 -- # killprocess 2127632 00:17:05.212 15:40:44 -- common/autotest_common.sh@926 -- # '[' -z 2127632 ']' 00:17:05.212 15:40:44 -- common/autotest_common.sh@930 -- # kill -0 2127632 00:17:05.212 15:40:44 -- common/autotest_common.sh@931 -- # uname 00:17:05.212 15:40:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:05.212 15:40:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2127632 00:17:05.212 15:40:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:05.212 15:40:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:05.212 15:40:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2127632' 00:17:05.212 killing process with pid 2127632 00:17:05.212 15:40:44 -- common/autotest_common.sh@945 -- # kill 2127632 00:17:05.212 15:40:44 -- common/autotest_common.sh@950 -- # wait 2127632 00:17:05.470 15:40:44 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:05.470 15:40:44 -- target/tls.sh@49 -- # local key hash crc 00:17:05.470 15:40:44 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:05.470 15:40:44 -- target/tls.sh@51 -- # hash=02 00:17:05.470 15:40:44 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:17:05.470 15:40:44 -- target/tls.sh@52 -- # gzip -1 -c 00:17:05.470 15:40:44 -- target/tls.sh@52 -- # tail -c8 00:17:05.470 15:40:44 -- target/tls.sh@52 -- # head -c 4 00:17:05.470 15:40:44 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:05.470 15:40:44 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:05.470 15:40:44 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:05.470 15:40:44 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:05.470 15:40:44 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:05.470 15:40:44 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:05.470 15:40:44 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:05.471 15:40:44 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:05.471 15:40:44 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:05.471 15:40:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:05.471 15:40:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:05.471 15:40:44 -- common/autotest_common.sh@10 -- # set +x 00:17:05.471 15:40:44 -- nvmf/common.sh@469 -- # nvmfpid=2131962 00:17:05.471 15:40:44 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:05.471 15:40:44 -- nvmf/common.sh@470 -- # waitforlisten 2131962 00:17:05.471 15:40:44 -- common/autotest_common.sh@819 -- # '[' -z 2131962 ']' 00:17:05.471 15:40:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.471 15:40:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.471 15:40:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.471 15:40:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.471 15:40:44 -- common/autotest_common.sh@10 -- # set +x 00:17:05.471 [2024-07-10 15:40:44.760321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:05.471 [2024-07-10 15:40:44.760405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:05.471 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.471 [2024-07-10 15:40:44.827398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.729 [2024-07-10 15:40:44.943156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:05.729 [2024-07-10 15:40:44.943324] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:05.729 [2024-07-10 15:40:44.943343] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:05.729 [2024-07-10 15:40:44.943357] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
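The key_long derivation above repeats the interchange-format construction with a 48-hex-digit key and hash label 02. The same condensed sketch as before, adjusted for the long key; the 02 label is taken from the traced hash=02 assignment:
  key=00112233445566778899aabbccddeeff0011223344556677
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
  echo "NVMeTLSkey-1:02:$(echo -n "$key$crc" | base64):"
  # prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: as in the trace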
00:17:05.729 [2024-07-10 15:40:44.943398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.661 15:40:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.661 15:40:45 -- common/autotest_common.sh@852 -- # return 0 00:17:06.661 15:40:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:06.661 15:40:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:06.661 15:40:45 -- common/autotest_common.sh@10 -- # set +x 00:17:06.661 15:40:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.661 15:40:45 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:06.661 15:40:45 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:06.661 15:40:45 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:06.918 [2024-07-10 15:40:46.060586] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.918 15:40:46 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:07.176 15:40:46 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:07.433 [2024-07-10 15:40:46.573969] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:07.433 [2024-07-10 15:40:46.574217] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.433 15:40:46 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:07.691 malloc0 00:17:07.691 15:40:46 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:07.949 15:40:47 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:08.208 15:40:47 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:08.208 15:40:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:08.208 15:40:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:08.208 15:40:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:08.208 15:40:47 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:08.208 15:40:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.208 15:40:47 -- target/tls.sh@28 -- # bdevperf_pid=2132266 00:17:08.208 15:40:47 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.208 15:40:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.208 15:40:47 -- target/tls.sh@31 -- # waitforlisten 2132266 /var/tmp/bdevperf.sock 00:17:08.208 15:40:47 -- common/autotest_common.sh@819 -- # '[' -z 2132266 
']' 00:17:08.208 15:40:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.208 15:40:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:08.208 15:40:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.208 15:40:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:08.208 15:40:47 -- common/autotest_common.sh@10 -- # set +x 00:17:08.208 [2024-07-10 15:40:47.446322] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:08.208 [2024-07-10 15:40:47.446392] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2132266 ] 00:17:08.208 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.208 [2024-07-10 15:40:47.502503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.466 [2024-07-10 15:40:47.606384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.031 15:40:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.031 15:40:48 -- common/autotest_common.sh@852 -- # return 0 00:17:09.031 15:40:48 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:09.288 [2024-07-10 15:40:48.571949] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.288 TLSTESTn1 00:17:09.288 15:40:48 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:09.546 Running I/O for 10 seconds... 
00:17:19.515 00:17:19.515 Latency(us) 00:17:19.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.515 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:19.515 Verification LBA range: start 0x0 length 0x2000 00:17:19.515 TLSTESTn1 : 10.03 2441.50 9.54 0.00 0.00 52356.63 7621.59 57089.14 00:17:19.515 =================================================================================================================== 00:17:19.515 Total : 2441.50 9.54 0.00 0.00 52356.63 7621.59 57089.14 00:17:19.515 0 00:17:19.515 15:40:58 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:19.515 15:40:58 -- target/tls.sh@45 -- # killprocess 2132266 00:17:19.515 15:40:58 -- common/autotest_common.sh@926 -- # '[' -z 2132266 ']' 00:17:19.515 15:40:58 -- common/autotest_common.sh@930 -- # kill -0 2132266 00:17:19.515 15:40:58 -- common/autotest_common.sh@931 -- # uname 00:17:19.515 15:40:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:19.515 15:40:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2132266 00:17:19.515 15:40:58 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:19.515 15:40:58 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:19.515 15:40:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2132266' 00:17:19.515 killing process with pid 2132266 00:17:19.515 15:40:58 -- common/autotest_common.sh@945 -- # kill 2132266 00:17:19.515 Received shutdown signal, test time was about 10.000000 seconds 00:17:19.515 00:17:19.515 Latency(us) 00:17:19.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:19.515 =================================================================================================================== 00:17:19.515 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:19.515 15:40:58 -- common/autotest_common.sh@950 -- # wait 2132266 00:17:19.774 15:40:59 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:19.774 15:40:59 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:19.774 15:40:59 -- common/autotest_common.sh@640 -- # local es=0 00:17:19.774 15:40:59 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:19.774 15:40:59 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:19.774 15:40:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:19.774 15:40:59 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:19.774 15:40:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:19.774 15:40:59 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:19.774 15:40:59 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:19.774 15:40:59 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:19.774 15:40:59 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:19.774 15:40:59 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:17:19.774 15:40:59 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.774 15:40:59 -- target/tls.sh@28 -- # bdevperf_pid=2133639 00:17:19.774 15:40:59 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:19.774 15:40:59 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:19.774 15:40:59 -- target/tls.sh@31 -- # waitforlisten 2133639 /var/tmp/bdevperf.sock 00:17:19.774 15:40:59 -- common/autotest_common.sh@819 -- # '[' -z 2133639 ']' 00:17:19.774 15:40:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:19.774 15:40:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:19.774 15:40:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:19.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:19.774 15:40:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:19.774 15:40:59 -- common/autotest_common.sh@10 -- # set +x 00:17:20.032 [2024-07-10 15:40:59.159016] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:20.032 [2024-07-10 15:40:59.159094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2133639 ] 00:17:20.032 EAL: No free 2048 kB hugepages reported on node 1 00:17:20.032 [2024-07-10 15:40:59.220208] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.032 [2024-07-10 15:40:59.323799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.966 15:41:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:20.966 15:41:00 -- common/autotest_common.sh@852 -- # return 0 00:17:20.966 15:41:00 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:21.224 [2024-07-10 15:41:00.348666] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:21.224 [2024-07-10 15:41:00.348720] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:21.224 request: 00:17:21.224 { 00:17:21.224 "name": "TLSTEST", 00:17:21.224 "trtype": "tcp", 00:17:21.224 "traddr": "10.0.0.2", 00:17:21.224 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.224 "adrfam": "ipv4", 00:17:21.224 "trsvcid": "4420", 00:17:21.224 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.224 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:21.224 "method": "bdev_nvme_attach_controller", 00:17:21.224 "req_id": 1 00:17:21.224 } 00:17:21.224 Got JSON-RPC error response 00:17:21.224 response: 00:17:21.224 { 00:17:21.224 "code": -22, 00:17:21.224 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:21.224 } 00:17:21.224 15:41:00 -- target/tls.sh@36 -- # killprocess 2133639 00:17:21.224 15:41:00 -- common/autotest_common.sh@926 -- # '[' -z 2133639 ']' 00:17:21.224 15:41:00 -- 
common/autotest_common.sh@930 -- # kill -0 2133639 00:17:21.224 15:41:00 -- common/autotest_common.sh@931 -- # uname 00:17:21.224 15:41:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.224 15:41:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2133639 00:17:21.224 15:41:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:21.224 15:41:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:21.224 15:41:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2133639' 00:17:21.224 killing process with pid 2133639 00:17:21.224 15:41:00 -- common/autotest_common.sh@945 -- # kill 2133639 00:17:21.224 Received shutdown signal, test time was about 10.000000 seconds 00:17:21.224 00:17:21.224 Latency(us) 00:17:21.224 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.224 =================================================================================================================== 00:17:21.224 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:21.224 15:41:00 -- common/autotest_common.sh@950 -- # wait 2133639 00:17:21.483 15:41:00 -- target/tls.sh@37 -- # return 1 00:17:21.483 15:41:00 -- common/autotest_common.sh@643 -- # es=1 00:17:21.483 15:41:00 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:21.483 15:41:00 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:21.483 15:41:00 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:21.483 15:41:00 -- target/tls.sh@183 -- # killprocess 2131962 00:17:21.483 15:41:00 -- common/autotest_common.sh@926 -- # '[' -z 2131962 ']' 00:17:21.483 15:41:00 -- common/autotest_common.sh@930 -- # kill -0 2131962 00:17:21.483 15:41:00 -- common/autotest_common.sh@931 -- # uname 00:17:21.483 15:41:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:21.483 15:41:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2131962 00:17:21.483 15:41:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:21.483 15:41:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:21.483 15:41:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2131962' 00:17:21.483 killing process with pid 2131962 00:17:21.483 15:41:00 -- common/autotest_common.sh@945 -- # kill 2131962 00:17:21.483 15:41:00 -- common/autotest_common.sh@950 -- # wait 2131962 00:17:21.741 15:41:00 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:21.741 15:41:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.741 15:41:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:21.741 15:41:00 -- common/autotest_common.sh@10 -- # set +x 00:17:21.741 15:41:00 -- nvmf/common.sh@469 -- # nvmfpid=2133922 00:17:21.741 15:41:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:21.741 15:41:00 -- nvmf/common.sh@470 -- # waitforlisten 2133922 00:17:21.741 15:41:00 -- common/autotest_common.sh@819 -- # '[' -z 2133922 ']' 00:17:21.741 15:41:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.741 15:41:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.741 15:41:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
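The failed attach above is the loose-permissions negative path: with the PSK file mode set to 0666, the bdev_nvme_attach_controller RPC's tcp_load_psk check rejects the key and the call returns -22 ("Could not retrieve PSK from file"). A minimal standalone sketch of that check, assuming a bdevperf instance already listening on /var/tmp/bdevperf.sock; the paths and flags are copied from this log and may differ locally:

  # Reproduce the world-readable-PSK rejection seen above (flags taken from this run).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  PSK=$SPDK/test/nvmf/target/key_long.txt
  chmod 0666 "$PSK"    # deliberately too permissive
  if "$SPDK"/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$PSK"; then
      echo "unexpected: attach succeeded with a world-readable PSK" >&2
  fi
  chmod 0600 "$PSK"    # the permissions the positive path later relies on

The target being started next repeats the same permission check on the nvmf_subsystem_add_host path, where it surfaces as a -32603 "Internal error".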
00:17:21.741 15:41:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.741 15:41:00 -- common/autotest_common.sh@10 -- # set +x 00:17:21.741 [2024-07-10 15:41:01.011907] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:21.741 [2024-07-10 15:41:01.011984] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.741 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.741 [2024-07-10 15:41:01.075820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.999 [2024-07-10 15:41:01.185372] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.999 [2024-07-10 15:41:01.185575] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.999 [2024-07-10 15:41:01.185594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.999 [2024-07-10 15:41:01.185607] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.999 [2024-07-10 15:41:01.185645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:22.932 15:41:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.932 15:41:01 -- common/autotest_common.sh@852 -- # return 0 00:17:22.932 15:41:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.932 15:41:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:22.932 15:41:01 -- common/autotest_common.sh@10 -- # set +x 00:17:22.932 15:41:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.932 15:41:02 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:22.932 15:41:02 -- common/autotest_common.sh@640 -- # local es=0 00:17:22.932 15:41:02 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:22.932 15:41:02 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:22.932 15:41:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:22.932 15:41:02 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:22.932 15:41:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:22.932 15:41:02 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:22.932 15:41:02 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:22.932 15:41:02 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:22.932 [2024-07-10 15:41:02.237858] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.932 15:41:02 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.190 15:41:02 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:23.447 [2024-07-10 15:41:02.747219] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:23.447 [2024-07-10 15:41:02.747457] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.447 15:41:02 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:23.704 malloc0 00:17:23.704 15:41:03 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:23.960 15:41:03 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:24.218 [2024-07-10 15:41:03.517537] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:24.218 [2024-07-10 15:41:03.517575] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:24.218 [2024-07-10 15:41:03.517595] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:24.218 request: 00:17:24.218 { 00:17:24.218 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.218 "host": "nqn.2016-06.io.spdk:host1", 00:17:24.218 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:24.218 "method": "nvmf_subsystem_add_host", 00:17:24.218 "req_id": 1 00:17:24.218 } 00:17:24.218 Got JSON-RPC error response 00:17:24.218 response: 00:17:24.218 { 00:17:24.218 "code": -32603, 00:17:24.218 "message": "Internal error" 00:17:24.218 } 00:17:24.218 15:41:03 -- common/autotest_common.sh@643 -- # es=1 00:17:24.218 15:41:03 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:24.218 15:41:03 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:24.218 15:41:03 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:24.218 15:41:03 -- target/tls.sh@189 -- # killprocess 2133922 00:17:24.218 15:41:03 -- common/autotest_common.sh@926 -- # '[' -z 2133922 ']' 00:17:24.218 15:41:03 -- common/autotest_common.sh@930 -- # kill -0 2133922 00:17:24.218 15:41:03 -- common/autotest_common.sh@931 -- # uname 00:17:24.218 15:41:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.218 15:41:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2133922 00:17:24.218 15:41:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:24.218 15:41:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:24.218 15:41:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2133922' 00:17:24.218 killing process with pid 2133922 00:17:24.218 15:41:03 -- common/autotest_common.sh@945 -- # kill 2133922 00:17:24.218 15:41:03 -- common/autotest_common.sh@950 -- # wait 2133922 00:17:24.476 15:41:03 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:24.476 15:41:03 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:24.476 15:41:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:24.476 15:41:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:24.476 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.733 15:41:03 -- nvmf/common.sh@469 -- # nvmfpid=2134350 00:17:24.733 15:41:03 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:17:24.733 15:41:03 -- nvmf/common.sh@470 -- # waitforlisten 2134350 00:17:24.733 15:41:03 -- common/autotest_common.sh@819 -- # '[' -z 2134350 ']' 00:17:24.733 15:41:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.733 15:41:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.733 15:41:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.733 15:41:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.733 15:41:03 -- common/autotest_common.sh@10 -- # set +x 00:17:24.733 [2024-07-10 15:41:03.897855] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:24.733 [2024-07-10 15:41:03.897945] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:24.733 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.733 [2024-07-10 15:41:03.966239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.733 [2024-07-10 15:41:04.079354] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:24.733 [2024-07-10 15:41:04.079547] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:24.733 [2024-07-10 15:41:04.079568] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:24.733 [2024-07-10 15:41:04.079582] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
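Both startup notices above are actionable: the target was launched with -e 0xFFFF, so the nvmf tracepoint groups are enabled and can either be sampled live or collected from the shared-memory file afterwards. A short sketch using the two commands the notices themselves quote (instance id 0 matches the -i 0 the target was started with; spdk_trace here stands for the tool built under build/bin):

  # Live snapshot of the enabled nvmf tracepoints (command quoted from the startup notice).
  spdk_trace -s nvmf -i 0
  # Or keep the raw shared-memory trace for offline analysis.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0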
00:17:24.733 [2024-07-10 15:41:04.079615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.665 15:41:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.665 15:41:04 -- common/autotest_common.sh@852 -- # return 0 00:17:25.665 15:41:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:25.665 15:41:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:25.665 15:41:04 -- common/autotest_common.sh@10 -- # set +x 00:17:25.665 15:41:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.665 15:41:04 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:25.665 15:41:04 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:25.665 15:41:04 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:25.922 [2024-07-10 15:41:05.064000] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:25.922 15:41:05 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.180 15:41:05 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.438 [2024-07-10 15:41:05.621578] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.438 [2024-07-10 15:41:05.621840] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.438 15:41:05 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:26.696 malloc0 00:17:26.696 15:41:05 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:26.954 15:41:06 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:27.212 15:41:06 -- target/tls.sh@197 -- # bdevperf_pid=2134652 00:17:27.212 15:41:06 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.212 15:41:06 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.212 15:41:06 -- target/tls.sh@200 -- # waitforlisten 2134652 /var/tmp/bdevperf.sock 00:17:27.212 15:41:06 -- common/autotest_common.sh@819 -- # '[' -z 2134652 ']' 00:17:27.212 15:41:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.212 15:41:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.212 15:41:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
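The steps just echoed are the full TLS-enabled target bring-up that the earlier negative tests lacked only the key permissions for: TCP transport, subsystem, a listener created with -k (TLS), a malloc namespace, and finally the host added with the now 0600 PSK. Condensed into the underlying RPCs, with every flag taken from this run:

  # Target-side setup as performed by setup_nvmf_tgt above (addresses, NQNs and key path as in this log).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK/scripts/rpc.py
  PSK=$SPDK/test/nvmf/target/key_long.txt
  chmod 0600 "$PSK"
  "$RPC" nvmf_create_transport -t tcp -o
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  "$RPC" bdev_malloc_create 32 4096 -b malloc0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  "$RPC" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$PSK"

The bdevperf process started right after this attaches to the same subsystem with the matching --psk, which is what creates the TLSTESTn1 bdev used for the verify workload.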
00:17:27.212 15:41:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.212 15:41:06 -- common/autotest_common.sh@10 -- # set +x 00:17:27.212 [2024-07-10 15:41:06.463116] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:27.212 [2024-07-10 15:41:06.463186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134652 ] 00:17:27.212 EAL: No free 2048 kB hugepages reported on node 1 00:17:27.212 [2024-07-10 15:41:06.523888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.469 [2024-07-10 15:41:06.631666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.401 15:41:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.401 15:41:07 -- common/autotest_common.sh@852 -- # return 0 00:17:28.401 15:41:07 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:28.401 [2024-07-10 15:41:07.671441] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.401 TLSTESTn1 00:17:28.401 15:41:07 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:17:28.966 15:41:08 -- target/tls.sh@205 -- # tgtconf='{ 00:17:28.966 "subsystems": [ 00:17:28.966 { 00:17:28.966 "subsystem": "iobuf", 00:17:28.966 "config": [ 00:17:28.966 { 00:17:28.966 "method": "iobuf_set_options", 00:17:28.966 "params": { 00:17:28.966 "small_pool_count": 8192, 00:17:28.966 "large_pool_count": 1024, 00:17:28.966 "small_bufsize": 8192, 00:17:28.966 "large_bufsize": 135168 00:17:28.966 } 00:17:28.966 } 00:17:28.966 ] 00:17:28.966 }, 00:17:28.966 { 00:17:28.966 "subsystem": "sock", 00:17:28.966 "config": [ 00:17:28.966 { 00:17:28.966 "method": "sock_impl_set_options", 00:17:28.966 "params": { 00:17:28.966 "impl_name": "posix", 00:17:28.966 "recv_buf_size": 2097152, 00:17:28.966 "send_buf_size": 2097152, 00:17:28.966 "enable_recv_pipe": true, 00:17:28.966 "enable_quickack": false, 00:17:28.966 "enable_placement_id": 0, 00:17:28.966 "enable_zerocopy_send_server": true, 00:17:28.966 "enable_zerocopy_send_client": false, 00:17:28.966 "zerocopy_threshold": 0, 00:17:28.966 "tls_version": 0, 00:17:28.966 "enable_ktls": false 00:17:28.966 } 00:17:28.966 }, 00:17:28.966 { 00:17:28.966 "method": "sock_impl_set_options", 00:17:28.966 "params": { 00:17:28.966 "impl_name": "ssl", 00:17:28.966 "recv_buf_size": 4096, 00:17:28.966 "send_buf_size": 4096, 00:17:28.966 "enable_recv_pipe": true, 00:17:28.966 "enable_quickack": false, 00:17:28.966 "enable_placement_id": 0, 00:17:28.966 "enable_zerocopy_send_server": true, 00:17:28.966 "enable_zerocopy_send_client": false, 00:17:28.966 "zerocopy_threshold": 0, 00:17:28.966 "tls_version": 0, 00:17:28.966 "enable_ktls": false 00:17:28.966 } 00:17:28.966 } 00:17:28.966 ] 00:17:28.966 }, 00:17:28.966 { 00:17:28.966 "subsystem": "vmd", 00:17:28.966 "config": [] 00:17:28.966 }, 00:17:28.966 { 00:17:28.966 "subsystem": "accel", 00:17:28.966 "config": [ 00:17:28.967 { 00:17:28.967 "method": "accel_set_options", 00:17:28.967 "params": { 00:17:28.967 "small_cache_size": 128, 
00:17:28.967 "large_cache_size": 16, 00:17:28.967 "task_count": 2048, 00:17:28.967 "sequence_count": 2048, 00:17:28.967 "buf_count": 2048 00:17:28.967 } 00:17:28.967 } 00:17:28.967 ] 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "subsystem": "bdev", 00:17:28.967 "config": [ 00:17:28.967 { 00:17:28.967 "method": "bdev_set_options", 00:17:28.967 "params": { 00:17:28.967 "bdev_io_pool_size": 65535, 00:17:28.967 "bdev_io_cache_size": 256, 00:17:28.967 "bdev_auto_examine": true, 00:17:28.967 "iobuf_small_cache_size": 128, 00:17:28.967 "iobuf_large_cache_size": 16 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "bdev_raid_set_options", 00:17:28.967 "params": { 00:17:28.967 "process_window_size_kb": 1024 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "bdev_iscsi_set_options", 00:17:28.967 "params": { 00:17:28.967 "timeout_sec": 30 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "bdev_nvme_set_options", 00:17:28.967 "params": { 00:17:28.967 "action_on_timeout": "none", 00:17:28.967 "timeout_us": 0, 00:17:28.967 "timeout_admin_us": 0, 00:17:28.967 "keep_alive_timeout_ms": 10000, 00:17:28.967 "transport_retry_count": 4, 00:17:28.967 "arbitration_burst": 0, 00:17:28.967 "low_priority_weight": 0, 00:17:28.967 "medium_priority_weight": 0, 00:17:28.967 "high_priority_weight": 0, 00:17:28.967 "nvme_adminq_poll_period_us": 10000, 00:17:28.967 "nvme_ioq_poll_period_us": 0, 00:17:28.967 "io_queue_requests": 0, 00:17:28.967 "delay_cmd_submit": true, 00:17:28.967 "bdev_retry_count": 3, 00:17:28.967 "transport_ack_timeout": 0, 00:17:28.967 "ctrlr_loss_timeout_sec": 0, 00:17:28.967 "reconnect_delay_sec": 0, 00:17:28.967 "fast_io_fail_timeout_sec": 0, 00:17:28.967 "generate_uuids": false, 00:17:28.967 "transport_tos": 0, 00:17:28.967 "io_path_stat": false, 00:17:28.967 "allow_accel_sequence": false 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "bdev_nvme_set_hotplug", 00:17:28.967 "params": { 00:17:28.967 "period_us": 100000, 00:17:28.967 "enable": false 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "bdev_malloc_create", 00:17:28.967 "params": { 00:17:28.967 "name": "malloc0", 00:17:28.967 "num_blocks": 8192, 00:17:28.967 "block_size": 4096, 00:17:28.967 "physical_block_size": 4096, 00:17:28.967 "uuid": "e9562178-d5dd-46e0-816c-c383990935ee", 00:17:28.967 "optimal_io_boundary": 0 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "bdev_wait_for_examine" 00:17:28.967 } 00:17:28.967 ] 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "subsystem": "nbd", 00:17:28.967 "config": [] 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "subsystem": "scheduler", 00:17:28.967 "config": [ 00:17:28.967 { 00:17:28.967 "method": "framework_set_scheduler", 00:17:28.967 "params": { 00:17:28.967 "name": "static" 00:17:28.967 } 00:17:28.967 } 00:17:28.967 ] 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "subsystem": "nvmf", 00:17:28.967 "config": [ 00:17:28.967 { 00:17:28.967 "method": "nvmf_set_config", 00:17:28.967 "params": { 00:17:28.967 "discovery_filter": "match_any", 00:17:28.967 "admin_cmd_passthru": { 00:17:28.967 "identify_ctrlr": false 00:17:28.967 } 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "nvmf_set_max_subsystems", 00:17:28.967 "params": { 00:17:28.967 "max_subsystems": 1024 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "nvmf_set_crdt", 00:17:28.967 "params": { 00:17:28.967 "crdt1": 0, 00:17:28.967 "crdt2": 0, 00:17:28.967 "crdt3": 0 00:17:28.967 } 
00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "nvmf_create_transport", 00:17:28.967 "params": { 00:17:28.967 "trtype": "TCP", 00:17:28.967 "max_queue_depth": 128, 00:17:28.967 "max_io_qpairs_per_ctrlr": 127, 00:17:28.967 "in_capsule_data_size": 4096, 00:17:28.967 "max_io_size": 131072, 00:17:28.967 "io_unit_size": 131072, 00:17:28.967 "max_aq_depth": 128, 00:17:28.967 "num_shared_buffers": 511, 00:17:28.967 "buf_cache_size": 4294967295, 00:17:28.967 "dif_insert_or_strip": false, 00:17:28.967 "zcopy": false, 00:17:28.967 "c2h_success": false, 00:17:28.967 "sock_priority": 0, 00:17:28.967 "abort_timeout_sec": 1 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "nvmf_create_subsystem", 00:17:28.967 "params": { 00:17:28.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.967 "allow_any_host": false, 00:17:28.967 "serial_number": "SPDK00000000000001", 00:17:28.967 "model_number": "SPDK bdev Controller", 00:17:28.967 "max_namespaces": 10, 00:17:28.967 "min_cntlid": 1, 00:17:28.967 "max_cntlid": 65519, 00:17:28.967 "ana_reporting": false 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "nvmf_subsystem_add_host", 00:17:28.967 "params": { 00:17:28.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.967 "host": "nqn.2016-06.io.spdk:host1", 00:17:28.967 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "nvmf_subsystem_add_ns", 00:17:28.967 "params": { 00:17:28.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.967 "namespace": { 00:17:28.967 "nsid": 1, 00:17:28.967 "bdev_name": "malloc0", 00:17:28.967 "nguid": "E9562178D5DD46E0816CC383990935EE", 00:17:28.967 "uuid": "e9562178-d5dd-46e0-816c-c383990935ee" 00:17:28.967 } 00:17:28.967 } 00:17:28.967 }, 00:17:28.967 { 00:17:28.967 "method": "nvmf_subsystem_add_listener", 00:17:28.967 "params": { 00:17:28.967 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.967 "listen_address": { 00:17:28.967 "trtype": "TCP", 00:17:28.967 "adrfam": "IPv4", 00:17:28.967 "traddr": "10.0.0.2", 00:17:28.967 "trsvcid": "4420" 00:17:28.967 }, 00:17:28.967 "secure_channel": true 00:17:28.967 } 00:17:28.967 } 00:17:28.967 ] 00:17:28.967 } 00:17:28.967 ] 00:17:28.967 }' 00:17:28.967 15:41:08 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:29.225 15:41:08 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:29.225 "subsystems": [ 00:17:29.225 { 00:17:29.225 "subsystem": "iobuf", 00:17:29.225 "config": [ 00:17:29.225 { 00:17:29.225 "method": "iobuf_set_options", 00:17:29.225 "params": { 00:17:29.225 "small_pool_count": 8192, 00:17:29.225 "large_pool_count": 1024, 00:17:29.225 "small_bufsize": 8192, 00:17:29.225 "large_bufsize": 135168 00:17:29.225 } 00:17:29.225 } 00:17:29.225 ] 00:17:29.225 }, 00:17:29.225 { 00:17:29.225 "subsystem": "sock", 00:17:29.225 "config": [ 00:17:29.225 { 00:17:29.225 "method": "sock_impl_set_options", 00:17:29.225 "params": { 00:17:29.225 "impl_name": "posix", 00:17:29.225 "recv_buf_size": 2097152, 00:17:29.225 "send_buf_size": 2097152, 00:17:29.225 "enable_recv_pipe": true, 00:17:29.225 "enable_quickack": false, 00:17:29.225 "enable_placement_id": 0, 00:17:29.225 "enable_zerocopy_send_server": true, 00:17:29.225 "enable_zerocopy_send_client": false, 00:17:29.225 "zerocopy_threshold": 0, 00:17:29.225 "tls_version": 0, 00:17:29.225 "enable_ktls": false 00:17:29.225 } 00:17:29.225 }, 00:17:29.225 { 00:17:29.225 "method": 
"sock_impl_set_options", 00:17:29.225 "params": { 00:17:29.225 "impl_name": "ssl", 00:17:29.225 "recv_buf_size": 4096, 00:17:29.225 "send_buf_size": 4096, 00:17:29.225 "enable_recv_pipe": true, 00:17:29.225 "enable_quickack": false, 00:17:29.225 "enable_placement_id": 0, 00:17:29.225 "enable_zerocopy_send_server": true, 00:17:29.225 "enable_zerocopy_send_client": false, 00:17:29.225 "zerocopy_threshold": 0, 00:17:29.225 "tls_version": 0, 00:17:29.225 "enable_ktls": false 00:17:29.225 } 00:17:29.225 } 00:17:29.225 ] 00:17:29.225 }, 00:17:29.225 { 00:17:29.225 "subsystem": "vmd", 00:17:29.225 "config": [] 00:17:29.225 }, 00:17:29.225 { 00:17:29.225 "subsystem": "accel", 00:17:29.225 "config": [ 00:17:29.225 { 00:17:29.225 "method": "accel_set_options", 00:17:29.225 "params": { 00:17:29.226 "small_cache_size": 128, 00:17:29.226 "large_cache_size": 16, 00:17:29.226 "task_count": 2048, 00:17:29.226 "sequence_count": 2048, 00:17:29.226 "buf_count": 2048 00:17:29.226 } 00:17:29.226 } 00:17:29.226 ] 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "subsystem": "bdev", 00:17:29.226 "config": [ 00:17:29.226 { 00:17:29.226 "method": "bdev_set_options", 00:17:29.226 "params": { 00:17:29.226 "bdev_io_pool_size": 65535, 00:17:29.226 "bdev_io_cache_size": 256, 00:17:29.226 "bdev_auto_examine": true, 00:17:29.226 "iobuf_small_cache_size": 128, 00:17:29.226 "iobuf_large_cache_size": 16 00:17:29.226 } 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "method": "bdev_raid_set_options", 00:17:29.226 "params": { 00:17:29.226 "process_window_size_kb": 1024 00:17:29.226 } 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "method": "bdev_iscsi_set_options", 00:17:29.226 "params": { 00:17:29.226 "timeout_sec": 30 00:17:29.226 } 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "method": "bdev_nvme_set_options", 00:17:29.226 "params": { 00:17:29.226 "action_on_timeout": "none", 00:17:29.226 "timeout_us": 0, 00:17:29.226 "timeout_admin_us": 0, 00:17:29.226 "keep_alive_timeout_ms": 10000, 00:17:29.226 "transport_retry_count": 4, 00:17:29.226 "arbitration_burst": 0, 00:17:29.226 "low_priority_weight": 0, 00:17:29.226 "medium_priority_weight": 0, 00:17:29.226 "high_priority_weight": 0, 00:17:29.226 "nvme_adminq_poll_period_us": 10000, 00:17:29.226 "nvme_ioq_poll_period_us": 0, 00:17:29.226 "io_queue_requests": 512, 00:17:29.226 "delay_cmd_submit": true, 00:17:29.226 "bdev_retry_count": 3, 00:17:29.226 "transport_ack_timeout": 0, 00:17:29.226 "ctrlr_loss_timeout_sec": 0, 00:17:29.226 "reconnect_delay_sec": 0, 00:17:29.226 "fast_io_fail_timeout_sec": 0, 00:17:29.226 "generate_uuids": false, 00:17:29.226 "transport_tos": 0, 00:17:29.226 "io_path_stat": false, 00:17:29.226 "allow_accel_sequence": false 00:17:29.226 } 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "method": "bdev_nvme_attach_controller", 00:17:29.226 "params": { 00:17:29.226 "name": "TLSTEST", 00:17:29.226 "trtype": "TCP", 00:17:29.226 "adrfam": "IPv4", 00:17:29.226 "traddr": "10.0.0.2", 00:17:29.226 "trsvcid": "4420", 00:17:29.226 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.226 "prchk_reftag": false, 00:17:29.226 "prchk_guard": false, 00:17:29.226 "ctrlr_loss_timeout_sec": 0, 00:17:29.226 "reconnect_delay_sec": 0, 00:17:29.226 "fast_io_fail_timeout_sec": 0, 00:17:29.226 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:29.226 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.226 "hdgst": false, 00:17:29.226 "ddgst": false 00:17:29.226 } 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "method": "bdev_nvme_set_hotplug", 00:17:29.226 
"params": { 00:17:29.226 "period_us": 100000, 00:17:29.226 "enable": false 00:17:29.226 } 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "method": "bdev_wait_for_examine" 00:17:29.226 } 00:17:29.226 ] 00:17:29.226 }, 00:17:29.226 { 00:17:29.226 "subsystem": "nbd", 00:17:29.226 "config": [] 00:17:29.226 } 00:17:29.226 ] 00:17:29.226 }' 00:17:29.226 15:41:08 -- target/tls.sh@208 -- # killprocess 2134652 00:17:29.226 15:41:08 -- common/autotest_common.sh@926 -- # '[' -z 2134652 ']' 00:17:29.226 15:41:08 -- common/autotest_common.sh@930 -- # kill -0 2134652 00:17:29.226 15:41:08 -- common/autotest_common.sh@931 -- # uname 00:17:29.226 15:41:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:29.226 15:41:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2134652 00:17:29.226 15:41:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:29.226 15:41:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:29.226 15:41:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2134652' 00:17:29.226 killing process with pid 2134652 00:17:29.226 15:41:08 -- common/autotest_common.sh@945 -- # kill 2134652 00:17:29.226 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.226 00:17:29.226 Latency(us) 00:17:29.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.226 =================================================================================================================== 00:17:29.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.226 15:41:08 -- common/autotest_common.sh@950 -- # wait 2134652 00:17:29.484 15:41:08 -- target/tls.sh@209 -- # killprocess 2134350 00:17:29.484 15:41:08 -- common/autotest_common.sh@926 -- # '[' -z 2134350 ']' 00:17:29.484 15:41:08 -- common/autotest_common.sh@930 -- # kill -0 2134350 00:17:29.484 15:41:08 -- common/autotest_common.sh@931 -- # uname 00:17:29.484 15:41:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:29.484 15:41:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2134350 00:17:29.484 15:41:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:29.484 15:41:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:29.484 15:41:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2134350' 00:17:29.484 killing process with pid 2134350 00:17:29.484 15:41:08 -- common/autotest_common.sh@945 -- # kill 2134350 00:17:29.484 15:41:08 -- common/autotest_common.sh@950 -- # wait 2134350 00:17:29.743 15:41:08 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:29.743 15:41:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:29.743 15:41:08 -- target/tls.sh@212 -- # echo '{ 00:17:29.743 "subsystems": [ 00:17:29.743 { 00:17:29.743 "subsystem": "iobuf", 00:17:29.743 "config": [ 00:17:29.743 { 00:17:29.743 "method": "iobuf_set_options", 00:17:29.743 "params": { 00:17:29.743 "small_pool_count": 8192, 00:17:29.743 "large_pool_count": 1024, 00:17:29.743 "small_bufsize": 8192, 00:17:29.743 "large_bufsize": 135168 00:17:29.743 } 00:17:29.743 } 00:17:29.743 ] 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "subsystem": "sock", 00:17:29.743 "config": [ 00:17:29.743 { 00:17:29.743 "method": "sock_impl_set_options", 00:17:29.743 "params": { 00:17:29.743 "impl_name": "posix", 00:17:29.743 "recv_buf_size": 2097152, 00:17:29.743 "send_buf_size": 2097152, 00:17:29.743 "enable_recv_pipe": true, 00:17:29.743 "enable_quickack": false, 
00:17:29.743 "enable_placement_id": 0, 00:17:29.743 "enable_zerocopy_send_server": true, 00:17:29.743 "enable_zerocopy_send_client": false, 00:17:29.743 "zerocopy_threshold": 0, 00:17:29.743 "tls_version": 0, 00:17:29.743 "enable_ktls": false 00:17:29.743 } 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "method": "sock_impl_set_options", 00:17:29.743 "params": { 00:17:29.743 "impl_name": "ssl", 00:17:29.743 "recv_buf_size": 4096, 00:17:29.743 "send_buf_size": 4096, 00:17:29.743 "enable_recv_pipe": true, 00:17:29.743 "enable_quickack": false, 00:17:29.743 "enable_placement_id": 0, 00:17:29.743 "enable_zerocopy_send_server": true, 00:17:29.743 "enable_zerocopy_send_client": false, 00:17:29.743 "zerocopy_threshold": 0, 00:17:29.743 "tls_version": 0, 00:17:29.743 "enable_ktls": false 00:17:29.743 } 00:17:29.743 } 00:17:29.743 ] 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "subsystem": "vmd", 00:17:29.743 "config": [] 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "subsystem": "accel", 00:17:29.743 "config": [ 00:17:29.743 { 00:17:29.743 "method": "accel_set_options", 00:17:29.743 "params": { 00:17:29.743 "small_cache_size": 128, 00:17:29.743 "large_cache_size": 16, 00:17:29.743 "task_count": 2048, 00:17:29.743 "sequence_count": 2048, 00:17:29.743 "buf_count": 2048 00:17:29.743 } 00:17:29.743 } 00:17:29.743 ] 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "subsystem": "bdev", 00:17:29.743 "config": [ 00:17:29.743 { 00:17:29.743 "method": "bdev_set_options", 00:17:29.743 "params": { 00:17:29.743 "bdev_io_pool_size": 65535, 00:17:29.743 "bdev_io_cache_size": 256, 00:17:29.743 "bdev_auto_examine": true, 00:17:29.743 "iobuf_small_cache_size": 128, 00:17:29.743 "iobuf_large_cache_size": 16 00:17:29.743 } 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "method": "bdev_raid_set_options", 00:17:29.743 "params": { 00:17:29.743 "process_window_size_kb": 1024 00:17:29.743 } 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "method": "bdev_iscsi_set_options", 00:17:29.743 "params": { 00:17:29.743 "timeout_sec": 30 00:17:29.743 } 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "method": "bdev_nvme_set_options", 00:17:29.743 "params": { 00:17:29.743 "action_on_timeout": "none", 00:17:29.743 "timeout_us": 0, 00:17:29.743 "timeout_admin_us": 0, 00:17:29.743 "keep_alive_timeout_ms": 10000, 00:17:29.743 "transport_retry_count": 4, 00:17:29.743 "arbitration_burst": 0, 00:17:29.743 "low_priority_weight": 0, 00:17:29.743 "medium_priority_weight": 0, 00:17:29.743 "high_priority_weight": 0, 00:17:29.743 "nvme_adminq_poll_period_us": 10000, 00:17:29.743 "nvme_ioq_poll_period_us": 0, 00:17:29.743 "io_queue_requests": 0, 00:17:29.743 "delay_cmd_submit": true, 00:17:29.743 "bdev_retry_count": 3, 00:17:29.743 "transport_ack_timeout": 0, 00:17:29.743 "ctrlr_loss_timeout_sec": 0, 00:17:29.743 "reconnect_delay_sec": 0, 00:17:29.743 "fast_io_fail_timeout_sec": 0, 00:17:29.743 "generate_uuids": false, 00:17:29.743 "transport_tos": 0, 00:17:29.743 "io_path_stat": false, 00:17:29.743 "allow_accel_sequence": false 00:17:29.743 } 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "method": "bdev_nvme_set_hotplug", 00:17:29.743 "params": { 00:17:29.743 "period_us": 100000, 00:17:29.743 "enable": false 00:17:29.743 } 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "method": "bdev_malloc_create", 00:17:29.743 "params": { 00:17:29.743 "name": "malloc0", 00:17:29.743 "num_blocks": 8192, 00:17:29.743 "block_size": 4096, 00:17:29.743 "physical_block_size": 4096, 00:17:29.743 "uuid": "e9562178-d5dd-46e0-816c-c383990935ee", 00:17:29.743 "optimal_io_boundary": 0 00:17:29.743 
} 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "method": "bdev_wait_for_examine" 00:17:29.743 } 00:17:29.743 ] 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "subsystem": "nbd", 00:17:29.743 "config": [] 00:17:29.743 }, 00:17:29.743 { 00:17:29.743 "subsystem": "scheduler", 00:17:29.743 "config": [ 00:17:29.743 { 00:17:29.744 "method": "framework_set_scheduler", 00:17:29.744 "params": { 00:17:29.744 "name": "static" 00:17:29.744 } 00:17:29.744 } 00:17:29.744 ] 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "subsystem": "nvmf", 00:17:29.744 "config": [ 00:17:29.744 { 00:17:29.744 "method": "nvmf_set_config", 00:17:29.744 "params": { 00:17:29.744 "discovery_filter": "match_any", 00:17:29.744 "admin_cmd_passthru": { 00:17:29.744 "identify_ctrlr": false 00:17:29.744 } 00:17:29.744 } 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "method": "nvmf_set_max_subsystems", 00:17:29.744 "params": { 00:17:29.744 "max_subsystems": 1024 00:17:29.744 } 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "method": "nvmf_set_crdt", 00:17:29.744 "params": { 00:17:29.744 "crdt1": 0, 00:17:29.744 "crdt2": 0, 00:17:29.744 "crdt3": 0 00:17:29.744 } 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "method": "nvmf_create_transport", 00:17:29.744 "params": { 00:17:29.744 "trtype": "TCP", 00:17:29.744 "max_queue_depth": 128, 00:17:29.744 "max_io_qpairs_per_ctrlr": 127, 00:17:29.744 "in_capsule_data_size": 4096, 00:17:29.744 "max_io_size": 131072, 00:17:29.744 "io_unit_size": 131072, 00:17:29.744 "max_aq_depth": 128, 00:17:29.744 "num_shared_buffers": 511, 00:17:29.744 "buf_cache_size": 4294967295, 00:17:29.744 "dif_insert_or_strip": false, 00:17:29.744 "zcopy": false, 00:17:29.744 "c2h_success": false, 00:17:29.744 "sock_priority": 0, 00:17:29.744 "abort_timeout_sec": 1 00:17:29.744 } 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "method": "nvmf_create_subsystem", 00:17:29.744 "params": { 00:17:29.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.744 "allow_any_host": false, 00:17:29.744 "serial_number": "SPDK00000000000001", 00:17:29.744 "model_number": "SPDK bdev Controller", 00:17:29.744 "max_namespaces": 10, 00:17:29.744 "min_cntlid": 1, 00:17:29.744 "max_cntlid": 65519, 00:17:29.744 "ana_reporting": false 00:17:29.744 } 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "method": "nvmf_subsystem_add_host", 00:17:29.744 "params": { 00:17:29.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.744 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.744 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:17:29.744 } 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "method": "nvmf_subsystem_add_ns", 00:17:29.744 "params": { 00:17:29.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.744 "namespace": { 00:17:29.744 "nsid": 1, 00:17:29.744 "bdev_name": "malloc0", 00:17:29.744 "nguid": "E9562178D5DD46E0816CC383990935EE", 00:17:29.744 "uuid": "e9562178-d5dd-46e0-816c-c383990935ee" 00:17:29.744 } 00:17:29.744 } 00:17:29.744 }, 00:17:29.744 { 00:17:29.744 "method": "nvmf_subsystem_add_listener", 00:17:29.744 "params": { 00:17:29.744 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.744 "listen_address": { 00:17:29.744 "trtype": "TCP", 00:17:29.744 "adrfam": "IPv4", 00:17:29.744 "traddr": "10.0.0.2", 00:17:29.744 "trsvcid": "4420" 00:17:29.744 }, 00:17:29.744 "secure_channel": true 00:17:29.744 } 00:17:29.744 } 00:17:29.744 ] 00:17:29.744 } 00:17:29.744 ] 00:17:29.744 }' 00:17:29.744 15:41:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:29.744 15:41:08 -- common/autotest_common.sh@10 -- # set +x 00:17:29.744 15:41:08 -- 
nvmf/common.sh@469 -- # nvmfpid=2134945 00:17:29.744 15:41:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:29.744 15:41:08 -- nvmf/common.sh@470 -- # waitforlisten 2134945 00:17:29.744 15:41:08 -- common/autotest_common.sh@819 -- # '[' -z 2134945 ']' 00:17:29.744 15:41:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.744 15:41:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.744 15:41:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.744 15:41:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.744 15:41:08 -- common/autotest_common.sh@10 -- # set +x 00:17:29.744 [2024-07-10 15:41:09.018876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:29.744 [2024-07-10 15:41:09.018973] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.744 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.744 [2024-07-10 15:41:09.087963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.002 [2024-07-10 15:41:09.200144] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.002 [2024-07-10 15:41:09.200315] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.002 [2024-07-10 15:41:09.200335] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.002 [2024-07-10 15:41:09.200349] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
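This restart is the configuration-replay variant of the same test: the JSON captured earlier with save_config is fed back to the new nvmf_tgt on file descriptor 62 (-c /dev/fd/62), so the subsystem, the TLS listener and the PSK host come back without re-issuing individual RPCs. A minimal sketch of the pattern; tgt.json stands in (hypothetically) for the JSON the harness keeps in a shell variable:

  # Capture the live target configuration, then restart from it (paths as in this run; tgt.json is illustrative).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/scripts/rpc.py save_config > tgt.json
  # ...stop the old target, then bring it back with the captured configuration:
  ip netns exec cvl_0_0_ns_spdk "$SPDK"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(cat tgt.json)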
00:17:30.002 [2024-07-10 15:41:09.200381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.260 [2024-07-10 15:41:09.427631] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.260 [2024-07-10 15:41:09.459662] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.260 [2024-07-10 15:41:09.459895] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.824 15:41:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.824 15:41:09 -- common/autotest_common.sh@852 -- # return 0 00:17:30.824 15:41:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:30.824 15:41:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:30.824 15:41:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.824 15:41:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.824 15:41:09 -- target/tls.sh@216 -- # bdevperf_pid=2135100 00:17:30.825 15:41:09 -- target/tls.sh@217 -- # waitforlisten 2135100 /var/tmp/bdevperf.sock 00:17:30.825 15:41:09 -- common/autotest_common.sh@819 -- # '[' -z 2135100 ']' 00:17:30.825 15:41:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.825 15:41:09 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:30.825 15:41:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.825 15:41:09 -- target/tls.sh@213 -- # echo '{ 00:17:30.825 "subsystems": [ 00:17:30.825 { 00:17:30.825 "subsystem": "iobuf", 00:17:30.825 "config": [ 00:17:30.825 { 00:17:30.825 "method": "iobuf_set_options", 00:17:30.825 "params": { 00:17:30.825 "small_pool_count": 8192, 00:17:30.825 "large_pool_count": 1024, 00:17:30.825 "small_bufsize": 8192, 00:17:30.825 "large_bufsize": 135168 00:17:30.825 } 00:17:30.825 } 00:17:30.825 ] 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "subsystem": "sock", 00:17:30.825 "config": [ 00:17:30.825 { 00:17:30.825 "method": "sock_impl_set_options", 00:17:30.825 "params": { 00:17:30.825 "impl_name": "posix", 00:17:30.825 "recv_buf_size": 2097152, 00:17:30.825 "send_buf_size": 2097152, 00:17:30.825 "enable_recv_pipe": true, 00:17:30.825 "enable_quickack": false, 00:17:30.825 "enable_placement_id": 0, 00:17:30.825 "enable_zerocopy_send_server": true, 00:17:30.825 "enable_zerocopy_send_client": false, 00:17:30.825 "zerocopy_threshold": 0, 00:17:30.825 "tls_version": 0, 00:17:30.825 "enable_ktls": false 00:17:30.825 } 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "method": "sock_impl_set_options", 00:17:30.825 "params": { 00:17:30.825 "impl_name": "ssl", 00:17:30.825 "recv_buf_size": 4096, 00:17:30.825 "send_buf_size": 4096, 00:17:30.825 "enable_recv_pipe": true, 00:17:30.825 "enable_quickack": false, 00:17:30.825 "enable_placement_id": 0, 00:17:30.825 "enable_zerocopy_send_server": true, 00:17:30.825 "enable_zerocopy_send_client": false, 00:17:30.825 "zerocopy_threshold": 0, 00:17:30.825 "tls_version": 0, 00:17:30.825 "enable_ktls": false 00:17:30.825 } 00:17:30.825 } 00:17:30.825 ] 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "subsystem": "vmd", 00:17:30.825 "config": [] 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "subsystem": "accel", 00:17:30.825 "config": [ 00:17:30.825 { 00:17:30.825 "method": "accel_set_options", 00:17:30.825 "params": { 00:17:30.825 "small_cache_size": 128, 00:17:30.825 
"large_cache_size": 16, 00:17:30.825 "task_count": 2048, 00:17:30.825 "sequence_count": 2048, 00:17:30.825 "buf_count": 2048 00:17:30.825 } 00:17:30.825 } 00:17:30.825 ] 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "subsystem": "bdev", 00:17:30.825 "config": [ 00:17:30.825 { 00:17:30.825 "method": "bdev_set_options", 00:17:30.825 "params": { 00:17:30.825 "bdev_io_pool_size": 65535, 00:17:30.825 "bdev_io_cache_size": 256, 00:17:30.825 "bdev_auto_examine": true, 00:17:30.825 "iobuf_small_cache_size": 128, 00:17:30.825 "iobuf_large_cache_size": 16 00:17:30.825 } 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "method": "bdev_raid_set_options", 00:17:30.825 "params": { 00:17:30.825 "process_window_size_kb": 1024 00:17:30.825 } 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "method": "bdev_iscsi_set_options", 00:17:30.825 "params": { 00:17:30.825 "timeout_sec": 30 00:17:30.825 } 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "method": "bdev_nvme_set_options", 00:17:30.825 "params": { 00:17:30.825 "action_on_timeout": "none", 00:17:30.825 "timeout_us": 0, 00:17:30.825 "timeout_admin_us": 0, 00:17:30.825 "keep_alive_timeout_ms": 10000, 00:17:30.825 "transport_retry_count": 4, 00:17:30.825 "arbitration_burst": 0, 00:17:30.825 "low_priority_weight": 0, 00:17:30.825 "medium_priority_weight": 0, 00:17:30.825 "high_priority_weight": 0, 00:17:30.825 "nvme_adminq_poll_period_us": 10000, 00:17:30.825 "nvme_ioq_poll_period_us": 0, 00:17:30.825 "io_queue_requests": 512, 00:17:30.825 "delay_cmd_submit": true, 00:17:30.825 "bdev_retry_count": 3, 00:17:30.825 "transport_ack_timeout": 0, 00:17:30.825 "ctrlr_loss_timeout_sec": 0, 00:17:30.825 "reconnect_delay_sec": 0, 00:17:30.825 "fast_io_fail_timeout_sec": 0, 00:17:30.825 "generate_uuids": false, 00:17:30.825 "transport_tos": 0, 00:17:30.825 "io_path_stat": false, 00:17:30.825 "allow_accel_sequence": false 00:17:30.825 } 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "method": "bdev_nvme_attach_controller", 00:17:30.825 "params": { 00:17:30.825 "name": "TLSTEST", 00:17:30.825 "trtype": "TCP", 00:17:30.825 "adrfam": "IPv4", 00:17:30.825 "traddr": "10.0.0.2", 00:17:30.825 "trsvcid": "4420", 00:17:30.825 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.825 "prchk_reftag": false, 00:17:30.825 "prchk_guard": false, 00:17:30.825 "ctrlr_loss_timeout_sec": 0, 00:17:30.825 "reconnect_delay_sec": 0, 00:17:30.825 "fast_io_fail_timeout_sec": 0, 00:17:30.825 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:17:30.825 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.825 "hdgst": false, 00:17:30.825 "ddgst": false 00:17:30.825 } 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "method": "bdev_nvme_set_hotplug", 00:17:30.825 "params": { 00:17:30.825 "period_us": 100000, 00:17:30.825 "enable": false 00:17:30.825 } 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "method": "bdev_wait_for_examine" 00:17:30.825 } 00:17:30.825 ] 00:17:30.825 }, 00:17:30.825 { 00:17:30.825 "subsystem": "nbd", 00:17:30.825 "config": [] 00:17:30.825 } 00:17:30.825 ] 00:17:30.825 }' 00:17:30.825 15:41:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:30.825 15:41:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.825 15:41:09 -- common/autotest_common.sh@10 -- # set +x 00:17:30.825 [2024-07-10 15:41:09.993650] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:30.825 [2024-07-10 15:41:09.993738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2135100 ] 00:17:30.825 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.825 [2024-07-10 15:41:10.055796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.825 [2024-07-10 15:41:10.161620] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.082 [2024-07-10 15:41:10.322817] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.646 15:41:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.646 15:41:10 -- common/autotest_common.sh@852 -- # return 0 00:17:31.646 15:41:10 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:31.928 Running I/O for 10 seconds... 00:17:41.968 00:17:41.968 Latency(us) 00:17:41.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.968 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.968 Verification LBA range: start 0x0 length 0x2000 00:17:41.968 TLSTESTn1 : 10.02 2422.90 9.46 0.00 0.00 52759.46 8155.59 56312.41 00:17:41.968 =================================================================================================================== 00:17:41.968 Total : 2422.90 9.46 0.00 0.00 52759.46 8155.59 56312.41 00:17:41.968 0 00:17:41.968 15:41:21 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.968 15:41:21 -- target/tls.sh@223 -- # killprocess 2135100 00:17:41.968 15:41:21 -- common/autotest_common.sh@926 -- # '[' -z 2135100 ']' 00:17:41.968 15:41:21 -- common/autotest_common.sh@930 -- # kill -0 2135100 00:17:41.968 15:41:21 -- common/autotest_common.sh@931 -- # uname 00:17:41.968 15:41:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.968 15:41:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2135100 00:17:41.968 15:41:21 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:41.968 15:41:21 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:41.968 15:41:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2135100' 00:17:41.968 killing process with pid 2135100 00:17:41.968 15:41:21 -- common/autotest_common.sh@945 -- # kill 2135100 00:17:41.968 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.968 00:17:41.968 Latency(us) 00:17:41.968 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.968 =================================================================================================================== 00:17:41.968 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.968 15:41:21 -- common/autotest_common.sh@950 -- # wait 2135100 00:17:42.226 15:41:21 -- target/tls.sh@224 -- # killprocess 2134945 00:17:42.226 15:41:21 -- common/autotest_common.sh@926 -- # '[' -z 2134945 ']' 00:17:42.226 15:41:21 -- common/autotest_common.sh@930 -- # kill -0 2134945 00:17:42.226 15:41:21 -- 
common/autotest_common.sh@931 -- # uname 00:17:42.226 15:41:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:42.226 15:41:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2134945 00:17:42.226 15:41:21 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:42.226 15:41:21 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:42.226 15:41:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2134945' 00:17:42.226 killing process with pid 2134945 00:17:42.226 15:41:21 -- common/autotest_common.sh@945 -- # kill 2134945 00:17:42.226 15:41:21 -- common/autotest_common.sh@950 -- # wait 2134945 00:17:42.485 15:41:21 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:42.485 15:41:21 -- target/tls.sh@227 -- # cleanup 00:17:42.485 15:41:21 -- target/tls.sh@15 -- # process_shm --id 0 00:17:42.485 15:41:21 -- common/autotest_common.sh@796 -- # type=--id 00:17:42.485 15:41:21 -- common/autotest_common.sh@797 -- # id=0 00:17:42.485 15:41:21 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:42.485 15:41:21 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:42.485 15:41:21 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:42.485 15:41:21 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:42.485 15:41:21 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:42.485 15:41:21 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:42.485 nvmf_trace.0 00:17:42.485 15:41:21 -- common/autotest_common.sh@811 -- # return 0 00:17:42.485 15:41:21 -- target/tls.sh@16 -- # killprocess 2135100 00:17:42.485 15:41:21 -- common/autotest_common.sh@926 -- # '[' -z 2135100 ']' 00:17:42.485 15:41:21 -- common/autotest_common.sh@930 -- # kill -0 2135100 00:17:42.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2135100) - No such process 00:17:42.485 15:41:21 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2135100 is not found' 00:17:42.485 Process with pid 2135100 is not found 00:17:42.485 15:41:21 -- target/tls.sh@17 -- # nvmftestfini 00:17:42.485 15:41:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:42.485 15:41:21 -- nvmf/common.sh@116 -- # sync 00:17:42.485 15:41:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:42.485 15:41:21 -- nvmf/common.sh@119 -- # set +e 00:17:42.485 15:41:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:42.485 15:41:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:42.485 rmmod nvme_tcp 00:17:42.485 rmmod nvme_fabrics 00:17:42.485 rmmod nvme_keyring 00:17:42.486 15:41:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:42.486 15:41:21 -- nvmf/common.sh@123 -- # set -e 00:17:42.486 15:41:21 -- nvmf/common.sh@124 -- # return 0 00:17:42.486 15:41:21 -- nvmf/common.sh@477 -- # '[' -n 2134945 ']' 00:17:42.486 15:41:21 -- nvmf/common.sh@478 -- # killprocess 2134945 00:17:42.486 15:41:21 -- common/autotest_common.sh@926 -- # '[' -z 2134945 ']' 00:17:42.486 15:41:21 -- common/autotest_common.sh@930 -- # kill -0 2134945 00:17:42.486 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2134945) - No such process 00:17:42.486 15:41:21 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2134945 is not found' 00:17:42.486 Process with pid 2134945 is not found 00:17:42.486 
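Teardown is deliberately tolerant: the trace shared-memory file is archived for later inspection, killprocess merely reports PIDs that already exited (the "Process with pid ... is not found" messages here are expected, since both daemons were stopped earlier), and the nvme kernel modules are unloaded before the network namespace is removed. A rough sketch of the same pattern; the archive path and pid variable are illustrative, and the harness's killprocess/process_shm helpers do this more carefully:

  # Archive the trace shm, tolerate already-dead processes, unload the nvme-over-fabrics modules.
  tar -C /dev/shm/ -cvzf ./nvmf_trace.0_shm.tar.gz nvmf_trace.0
  kill "$pid" 2>/dev/null || echo "Process with pid $pid is not found"
  modprobe -v -r nvme-tcp nvme-fabrics nvme-keyring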
15:41:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:42.486 15:41:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:42.486 15:41:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:42.486 15:41:21 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.486 15:41:21 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:42.486 15:41:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.486 15:41:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.486 15:41:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.019 15:41:23 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:45.019 15:41:23 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:17:45.019 00:17:45.019 real 1m15.868s 00:17:45.019 user 1m57.094s 00:17:45.019 sys 0m27.059s 00:17:45.019 15:41:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:45.019 15:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:45.019 ************************************ 00:17:45.019 END TEST nvmf_tls 00:17:45.019 ************************************ 00:17:45.019 15:41:23 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:45.019 15:41:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:45.019 15:41:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:45.019 15:41:23 -- common/autotest_common.sh@10 -- # set +x 00:17:45.019 ************************************ 00:17:45.019 START TEST nvmf_fips 00:17:45.019 ************************************ 00:17:45.019 15:41:23 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:45.019 * Looking for test storage... 
00:17:45.019 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:17:45.019 15:41:23 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:45.019 15:41:23 -- nvmf/common.sh@7 -- # uname -s 00:17:45.019 15:41:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:45.019 15:41:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:45.019 15:41:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:45.019 15:41:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:45.019 15:41:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:45.019 15:41:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:45.019 15:41:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:45.019 15:41:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:45.019 15:41:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:45.019 15:41:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:45.019 15:41:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.019 15:41:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:45.019 15:41:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:45.019 15:41:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:45.019 15:41:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:45.019 15:41:23 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:45.019 15:41:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:45.019 15:41:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:45.019 15:41:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:45.019 15:41:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.019 15:41:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.019 15:41:23 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.019 15:41:23 -- paths/export.sh@5 -- # export PATH 00:17:45.019 15:41:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:45.019 15:41:23 -- nvmf/common.sh@46 -- # : 0 00:17:45.019 15:41:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:45.019 15:41:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:45.019 15:41:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:45.020 15:41:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:45.020 15:41:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:45.020 15:41:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:45.020 15:41:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:45.020 15:41:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:45.020 15:41:23 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:45.020 15:41:23 -- fips/fips.sh@89 -- # check_openssl_version 00:17:45.020 15:41:23 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:45.020 15:41:23 -- fips/fips.sh@85 -- # openssl version 00:17:45.020 15:41:23 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:45.020 15:41:23 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:45.020 15:41:23 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:45.020 15:41:23 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:45.020 15:41:23 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:45.020 15:41:23 -- scripts/common.sh@335 -- # IFS=.-: 00:17:45.020 15:41:23 -- scripts/common.sh@335 -- # read -ra ver1 00:17:45.020 15:41:23 -- scripts/common.sh@336 -- # IFS=.-: 00:17:45.020 15:41:23 -- scripts/common.sh@336 -- # read -ra ver2 00:17:45.020 15:41:23 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:45.020 15:41:23 -- scripts/common.sh@339 -- # ver1_l=3 00:17:45.020 15:41:23 -- scripts/common.sh@340 -- # ver2_l=3 00:17:45.020 15:41:23 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:45.020 15:41:23 -- scripts/common.sh@343 -- # case "$op" in 00:17:45.020 15:41:23 -- scripts/common.sh@347 -- # : 1 00:17:45.020 15:41:23 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:45.020 15:41:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:45.020 15:41:23 -- scripts/common.sh@364 -- # decimal 3 00:17:45.020 15:41:23 -- scripts/common.sh@352 -- # local d=3 00:17:45.020 15:41:23 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:45.020 15:41:23 -- scripts/common.sh@354 -- # echo 3 00:17:45.020 15:41:23 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:45.020 15:41:23 -- scripts/common.sh@365 -- # decimal 3 00:17:45.020 15:41:23 -- scripts/common.sh@352 -- # local d=3 00:17:45.020 15:41:23 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:45.020 15:41:23 -- scripts/common.sh@354 -- # echo 3 00:17:45.020 15:41:23 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:45.020 15:41:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:45.020 15:41:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:45.020 15:41:23 -- scripts/common.sh@363 -- # (( v++ )) 00:17:45.020 15:41:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.020 15:41:23 -- scripts/common.sh@364 -- # decimal 0 00:17:45.020 15:41:23 -- scripts/common.sh@352 -- # local d=0 00:17:45.020 15:41:23 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:45.020 15:41:23 -- scripts/common.sh@354 -- # echo 0 00:17:45.020 15:41:23 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:45.020 15:41:23 -- scripts/common.sh@365 -- # decimal 0 00:17:45.020 15:41:23 -- scripts/common.sh@352 -- # local d=0 00:17:45.020 15:41:23 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:45.020 15:41:23 -- scripts/common.sh@354 -- # echo 0 00:17:45.020 15:41:23 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:45.020 15:41:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:45.020 15:41:23 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:45.020 15:41:23 -- scripts/common.sh@363 -- # (( v++ )) 00:17:45.020 15:41:23 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:45.020 15:41:23 -- scripts/common.sh@364 -- # decimal 9 00:17:45.020 15:41:23 -- scripts/common.sh@352 -- # local d=9 00:17:45.020 15:41:23 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:45.020 15:41:23 -- scripts/common.sh@354 -- # echo 9 00:17:45.020 15:41:23 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:45.020 15:41:23 -- scripts/common.sh@365 -- # decimal 0 00:17:45.020 15:41:23 -- scripts/common.sh@352 -- # local d=0 00:17:45.020 15:41:23 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:45.020 15:41:23 -- scripts/common.sh@354 -- # echo 0 00:17:45.020 15:41:23 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:45.020 15:41:23 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:45.020 15:41:23 -- scripts/common.sh@366 -- # return 0 00:17:45.020 15:41:23 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:45.020 15:41:23 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:17:45.020 15:41:23 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:45.020 15:41:23 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:45.020 15:41:23 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:45.020 15:41:23 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:45.020 15:41:23 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:45.020 15:41:23 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:45.020 15:41:23 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:45.020 15:41:23 -- fips/fips.sh@114 -- # build_openssl_config 00:17:45.020 15:41:23 -- fips/fips.sh@37 -- # cat 00:17:45.020 15:41:23 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:17:45.020 15:41:23 -- fips/fips.sh@58 -- # cat - 00:17:45.020 15:41:23 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:45.020 15:41:23 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:45.020 15:41:23 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:45.020 15:41:23 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:45.020 15:41:23 -- fips/fips.sh@117 -- # openssl list -providers 00:17:45.020 15:41:23 -- fips/fips.sh@117 -- # grep name 00:17:45.020 15:41:24 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:45.020 15:41:24 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:45.020 15:41:24 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:45.020 15:41:24 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:45.020 15:41:24 -- fips/fips.sh@128 -- # : 00:17:45.020 15:41:24 -- common/autotest_common.sh@640 -- # local es=0 00:17:45.020 15:41:24 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:45.020 15:41:24 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:45.020 15:41:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.020 15:41:24 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:45.020 15:41:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.020 15:41:24 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:45.020 15:41:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:45.020 15:41:24 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:45.020 15:41:24 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:45.020 15:41:24 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:45.020 Error setting digest 00:17:45.020 003216F3977F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:45.020 003216F3977F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:45.020 15:41:24 -- common/autotest_common.sh@643 -- # es=1 00:17:45.020 15:41:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:45.020 15:41:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:45.020 15:41:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
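The stretch of xtrace above is the FIPS preflight in fips.sh: compare the OpenSSL version against 3.0.0, confirm the fips.so module exists under the modules directory, force the base and FIPS providers through a generated config, and prove that MD5 is rejected. A condensed sketch of those checks follows; it is not the verbatim test script (the config generation done by build_openssl_config is omitted and assumed to have produced spdk_fips.conf, and the digit-by-digit cmp_versions loop is swapped for sort -V for brevity):

    #!/usr/bin/env bash
    set -e

    # 1. OpenSSL must be >= 3.0.0 (the trace compares 3.0.9 against 3.0.0).
    ver=$(openssl version | awk '{print $2}')
    printf '3.0.0\n%s\n' "$ver" | sort -V -C || { echo "need OpenSSL >= 3.0.0"; exit 1; }

    # 2. The FIPS provider module must be installed where OpenSSL looks for it.
    moddir=$(openssl info -modulesdir)
    [[ -f "$moddir/fips.so" ]] || { echo "fips.so not found in $moddir"; exit 1; }

    # 3. Force the provider setup via the generated config and confirm both the
    #    base and fips providers are loaded.
    export OPENSSL_CONF=spdk_fips.conf
    openssl list -providers | grep -qi base
    openssl list -providers | grep -qi fips

    # 4. With FIPS enforced, a non-approved digest such as MD5 must fail.
    if echo test | openssl md5 >/dev/null 2>&1; then
        echo "MD5 unexpectedly succeeded, FIPS mode is not active"; exit 1
    fi
    echo "FIPS preflight passed"

The trace shows exactly this outcome: both providers are listed and the openssl md5 attempt fails with "unsupported", which is what lets the test proceed to nvmftestinit.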
00:17:45.020 15:41:24 -- fips/fips.sh@131 -- # nvmftestinit 00:17:45.020 15:41:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:45.020 15:41:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:45.020 15:41:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:45.020 15:41:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:45.020 15:41:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:45.020 15:41:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:45.020 15:41:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:45.020 15:41:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:45.020 15:41:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:45.020 15:41:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:45.020 15:41:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:45.020 15:41:24 -- common/autotest_common.sh@10 -- # set +x 00:17:46.921 15:41:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:46.921 15:41:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:46.921 15:41:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:46.921 15:41:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:46.921 15:41:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:46.921 15:41:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:46.921 15:41:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:46.921 15:41:25 -- nvmf/common.sh@294 -- # net_devs=() 00:17:46.921 15:41:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:46.921 15:41:25 -- nvmf/common.sh@295 -- # e810=() 00:17:46.921 15:41:25 -- nvmf/common.sh@295 -- # local -ga e810 00:17:46.921 15:41:25 -- nvmf/common.sh@296 -- # x722=() 00:17:46.921 15:41:25 -- nvmf/common.sh@296 -- # local -ga x722 00:17:46.921 15:41:25 -- nvmf/common.sh@297 -- # mlx=() 00:17:46.921 15:41:25 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:46.921 15:41:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:46.921 15:41:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:46.921 15:41:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:46.921 15:41:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:46.921 15:41:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:46.921 15:41:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:46.921 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:17:46.921 15:41:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:46.921 15:41:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:46.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:46.921 15:41:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:46.921 15:41:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:46.921 15:41:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.921 15:41:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:46.921 15:41:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.921 15:41:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:46.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:46.921 15:41:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.921 15:41:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:46.921 15:41:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:46.921 15:41:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:46.921 15:41:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:46.921 15:41:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:46.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:46.921 15:41:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:46.921 15:41:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:46.921 15:41:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:46.921 15:41:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:46.921 15:41:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:46.921 15:41:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:46.921 15:41:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:46.921 15:41:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:46.921 15:41:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:46.921 15:41:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:46.921 15:41:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:46.921 15:41:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:46.921 15:41:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:46.921 15:41:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:46.921 15:41:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:46.921 15:41:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:46.921 15:41:25 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:17:46.921 15:41:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:46.921 15:41:26 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:46.921 15:41:26 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:46.921 15:41:26 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:46.921 15:41:26 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.921 15:41:26 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.921 15:41:26 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.921 15:41:26 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:46.921 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.921 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:17:46.921 00:17:46.921 --- 10.0.0.2 ping statistics --- 00:17:46.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.921 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:17:46.921 15:41:26 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:46.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:17:46.921 00:17:46.922 --- 10.0.0.1 ping statistics --- 00:17:46.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.922 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:17:46.922 15:41:26 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.922 15:41:26 -- nvmf/common.sh@410 -- # return 0 00:17:46.922 15:41:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:46.922 15:41:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.922 15:41:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:46.922 15:41:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:46.922 15:41:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.922 15:41:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:46.922 15:41:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:46.922 15:41:26 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:46.922 15:41:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:46.922 15:41:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:46.922 15:41:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.922 15:41:26 -- nvmf/common.sh@469 -- # nvmfpid=2138437 00:17:46.922 15:41:26 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:46.922 15:41:26 -- nvmf/common.sh@470 -- # waitforlisten 2138437 00:17:46.922 15:41:26 -- common/autotest_common.sh@819 -- # '[' -z 2138437 ']' 00:17:46.922 15:41:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.922 15:41:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:46.922 15:41:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.922 15:41:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:46.922 15:41:26 -- common/autotest_common.sh@10 -- # set +x 00:17:46.922 [2024-07-10 15:41:26.229154] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
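nvmf_tcp_init in the trace above splits the two E810 ports into a target side and an initiator side by moving one interface into a network namespace. Reduced to plain ip/iptables commands, with the interface and namespace names exactly as printed in the trace, the wiring is roughly this (a sketch of the setup, not the nvmf/common.sh source):

    #!/usr/bin/env bash
    TARGET_IF=cvl_0_0        # moved into the namespace, becomes the NVMe/TCP target port
    INITIATOR_IF=cvl_0_1     # stays in the default namespace as the initiator port
    NS=cvl_0_0_ns_spdk

    ip -4 addr flush dev "$TARGET_IF"
    ip -4 addr flush dev "$INITIATOR_IF"

    ip netns add "$NS"
    ip link set "$TARGET_IF" netns "$NS"

    ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"

    ip link set "$INITIATOR_IF" up
    ip netns exec "$NS" ip link set "$TARGET_IF" up
    ip netns exec "$NS" ip link set lo up

    # Open the NVMe/TCP port on the initiator side and verify both directions.
    iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec "$NS" ping -c 1 10.0.0.1

The one-packet pings in this stretch of the trace are the success criterion for the wiring; nvmf_tgt is then launched with ip netns exec so it listens on 10.0.0.2 from inside the namespace.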
00:17:46.922 [2024-07-10 15:41:26.229254] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.922 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.922 [2024-07-10 15:41:26.293170] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.180 [2024-07-10 15:41:26.405159] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:47.180 [2024-07-10 15:41:26.405327] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.180 [2024-07-10 15:41:26.405355] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.180 [2024-07-10 15:41:26.405378] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.180 [2024-07-10 15:41:26.405417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.114 15:41:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:48.114 15:41:27 -- common/autotest_common.sh@852 -- # return 0 00:17:48.114 15:41:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:48.114 15:41:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:48.114 15:41:27 -- common/autotest_common.sh@10 -- # set +x 00:17:48.114 15:41:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.114 15:41:27 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:48.114 15:41:27 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:48.114 15:41:27 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:48.114 15:41:27 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:48.114 15:41:27 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:48.114 15:41:27 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:48.114 15:41:27 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:48.114 15:41:27 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:48.114 [2024-07-10 15:41:27.373969] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.114 [2024-07-10 15:41:27.389963] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:48.115 [2024-07-10 15:41:27.390197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.115 malloc0 00:17:48.115 15:41:27 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:48.115 15:41:27 -- fips/fips.sh@148 -- # bdevperf_pid=2138605 00:17:48.115 15:41:27 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:48.115 15:41:27 -- fips/fips.sh@149 -- # waitforlisten 2138605 /var/tmp/bdevperf.sock 00:17:48.115 15:41:27 -- common/autotest_common.sh@819 -- # '[' -z 2138605 ']' 00:17:48.115 15:41:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:48.115 15:41:27 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:17:48.115 15:41:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:48.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:48.115 15:41:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:48.115 15:41:27 -- common/autotest_common.sh@10 -- # set +x 00:17:48.373 [2024-07-10 15:41:27.507085] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:48.373 [2024-07-10 15:41:27.507160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138605 ] 00:17:48.373 EAL: No free 2048 kB hugepages reported on node 1 00:17:48.373 [2024-07-10 15:41:27.567351] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.373 [2024-07-10 15:41:27.672722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:49.306 15:41:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:49.306 15:41:28 -- common/autotest_common.sh@852 -- # return 0 00:17:49.306 15:41:28 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:17:49.564 [2024-07-10 15:41:28.699854] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:49.564 TLSTESTn1 00:17:49.564 15:41:28 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:49.564 Running I/O for 10 seconds... 
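With the target up, the TLS portion of fips.sh writes a pre-shared key in the NVMe TLS interchange format, restricts its permissions, and drives verify I/O through bdevperf over a TLS-enabled listener. A trimmed sketch of the initiator-side flow, using the commands visible in the trace (binary paths shortened; the target-side RPC sequence is not fully visible above, so it is only summarized in a comment):

    #!/usr/bin/env bash
    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    KEY_PATH=./key.txt

    echo -n "$KEY" > "$KEY_PATH"
    chmod 0600 "$KEY_PATH"          # the test locks the PSK file down before use

    # Target side (via rpc.py against nvmf_tgt in the namespace): TCP transport
    # plus a listener on 10.0.0.2:4420 configured with the same PSK.

    # Initiator side: bdevperf in wait mode, attach with the PSK, run the job.
    bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    sleep 2   # the real script waits for the RPC socket (waitforlisten) instead
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk "$KEY_PATH"
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The "TLS support is considered experimental" notices in the trace come from both ends of this handshake, and the 10-second verify run that starts right above is what produces the TLSTESTn1 latency summary below.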
00:18:01.754 00:18:01.754 Latency(us) 00:18:01.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.754 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:01.754 Verification LBA range: start 0x0 length 0x2000 00:18:01.754 TLSTESTn1 : 10.03 2439.32 9.53 0.00 0.00 52400.88 6893.42 56700.78 00:18:01.754 =================================================================================================================== 00:18:01.754 Total : 2439.32 9.53 0.00 0.00 52400.88 6893.42 56700.78 00:18:01.754 0 00:18:01.754 15:41:38 -- fips/fips.sh@1 -- # cleanup 00:18:01.754 15:41:38 -- fips/fips.sh@15 -- # process_shm --id 0 00:18:01.754 15:41:38 -- common/autotest_common.sh@796 -- # type=--id 00:18:01.754 15:41:38 -- common/autotest_common.sh@797 -- # id=0 00:18:01.754 15:41:38 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:18:01.754 15:41:38 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:01.754 15:41:38 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:18:01.754 15:41:38 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:18:01.754 15:41:38 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:18:01.754 15:41:38 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:01.754 nvmf_trace.0 00:18:01.754 15:41:38 -- common/autotest_common.sh@811 -- # return 0 00:18:01.754 15:41:38 -- fips/fips.sh@16 -- # killprocess 2138605 00:18:01.754 15:41:38 -- common/autotest_common.sh@926 -- # '[' -z 2138605 ']' 00:18:01.754 15:41:38 -- common/autotest_common.sh@930 -- # kill -0 2138605 00:18:01.754 15:41:38 -- common/autotest_common.sh@931 -- # uname 00:18:01.754 15:41:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:01.754 15:41:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2138605 00:18:01.754 15:41:39 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:01.754 15:41:39 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:01.754 15:41:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2138605' 00:18:01.754 killing process with pid 2138605 00:18:01.754 15:41:39 -- common/autotest_common.sh@945 -- # kill 2138605 00:18:01.754 Received shutdown signal, test time was about 10.000000 seconds 00:18:01.754 00:18:01.754 Latency(us) 00:18:01.754 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:01.754 =================================================================================================================== 00:18:01.754 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:01.754 15:41:39 -- common/autotest_common.sh@950 -- # wait 2138605 00:18:01.754 15:41:39 -- fips/fips.sh@17 -- # nvmftestfini 00:18:01.754 15:41:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:01.754 15:41:39 -- nvmf/common.sh@116 -- # sync 00:18:01.754 15:41:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:01.754 15:41:39 -- nvmf/common.sh@119 -- # set +e 00:18:01.754 15:41:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:01.754 15:41:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:01.754 rmmod nvme_tcp 00:18:01.754 rmmod nvme_fabrics 00:18:01.754 rmmod nvme_keyring 00:18:01.754 15:41:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:01.754 15:41:39 -- nvmf/common.sh@123 -- # set -e 00:18:01.754 15:41:39 -- nvmf/common.sh@124 -- # return 0 
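The shutdown sequence just above repeats a cleanup pattern used throughout these suites: archive the target's trace buffer out of /dev/shm, then kill the application by pid only after checking that it still looks like an SPDK reactor rather than a sudo wrapper. Simplified restatements of the two helpers whose xtrace appears here (argument parsing and corner cases from autotest_common.sh are dropped):

    # Tar up shared-memory trace files such as nvmf_trace.0 for the build artifacts.
    process_shm() {
        local id=$1 n
        for n in $(find /dev/shm -name "*.${id}" -printf '%f\n'); do
            tar -C /dev/shm/ -cvzf "./${n}_shm.tar.gz" "$n"
        done
    }

    # Kill a test app by pid, tolerating the case where it already exited.
    killprocess() {
        local pid=$1
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "Process with pid $pid is not found"
            return 0
        fi
        if [ "$(uname)" = Linux ]; then
            # never signal a sudo wrapper by mistake
            [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        fi
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }

That accounts for the two different outcomes in the trace: bdevperf (pid 2138605, comm reactor_2) is killed and waited on here, while the probes against the TLS suite's already-exited pids earlier simply print the "is not found" message and return cleanly.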
00:18:01.754 15:41:39 -- nvmf/common.sh@477 -- # '[' -n 2138437 ']' 00:18:01.754 15:41:39 -- nvmf/common.sh@478 -- # killprocess 2138437 00:18:01.754 15:41:39 -- common/autotest_common.sh@926 -- # '[' -z 2138437 ']' 00:18:01.754 15:41:39 -- common/autotest_common.sh@930 -- # kill -0 2138437 00:18:01.754 15:41:39 -- common/autotest_common.sh@931 -- # uname 00:18:01.754 15:41:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:01.754 15:41:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2138437 00:18:01.754 15:41:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:01.754 15:41:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:01.754 15:41:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2138437' 00:18:01.754 killing process with pid 2138437 00:18:01.754 15:41:39 -- common/autotest_common.sh@945 -- # kill 2138437 00:18:01.754 15:41:39 -- common/autotest_common.sh@950 -- # wait 2138437 00:18:01.754 15:41:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:01.754 15:41:39 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:01.754 15:41:39 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:01.754 15:41:39 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:01.754 15:41:39 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:01.754 15:41:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.754 15:41:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:01.754 15:41:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.686 15:41:41 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:02.686 15:41:41 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:02.686 00:18:02.686 real 0m17.828s 00:18:02.686 user 0m22.069s 00:18:02.686 sys 0m7.158s 00:18:02.686 15:41:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.686 15:41:41 -- common/autotest_common.sh@10 -- # set +x 00:18:02.686 ************************************ 00:18:02.686 END TEST nvmf_fips 00:18:02.686 ************************************ 00:18:02.686 15:41:41 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:18:02.686 15:41:41 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:02.686 15:41:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:02.686 15:41:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:02.686 15:41:41 -- common/autotest_common.sh@10 -- # set +x 00:18:02.686 ************************************ 00:18:02.686 START TEST nvmf_fuzz 00:18:02.686 ************************************ 00:18:02.686 15:41:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:02.686 * Looking for test storage... 
00:18:02.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.686 15:41:41 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.686 15:41:41 -- nvmf/common.sh@7 -- # uname -s 00:18:02.686 15:41:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.686 15:41:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.686 15:41:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.686 15:41:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.686 15:41:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.686 15:41:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.686 15:41:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.686 15:41:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.686 15:41:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.686 15:41:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.686 15:41:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.686 15:41:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.686 15:41:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.686 15:41:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.686 15:41:41 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.686 15:41:41 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.686 15:41:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.686 15:41:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.686 15:41:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.686 15:41:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.686 15:41:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.686 15:41:41 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.686 15:41:41 -- paths/export.sh@5 -- # export PATH 00:18:02.686 15:41:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.687 15:41:41 -- nvmf/common.sh@46 -- # : 0 00:18:02.687 15:41:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.687 15:41:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.687 15:41:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.687 15:41:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.687 15:41:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.687 15:41:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:02.687 15:41:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.687 15:41:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.687 15:41:41 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:02.687 15:41:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:02.687 15:41:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.687 15:41:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:02.687 15:41:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.687 15:41:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.687 15:41:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.687 15:41:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.687 15:41:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.687 15:41:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:02.687 15:41:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:02.687 15:41:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:02.687 15:41:41 -- common/autotest_common.sh@10 -- # set +x 00:18:04.587 15:41:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:04.587 15:41:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:04.587 15:41:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:04.587 15:41:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:04.587 15:41:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:04.587 15:41:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:04.587 15:41:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:04.587 15:41:43 -- nvmf/common.sh@294 -- # net_devs=() 00:18:04.587 15:41:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:04.587 15:41:43 -- nvmf/common.sh@295 -- # e810=() 00:18:04.587 15:41:43 -- nvmf/common.sh@295 -- # local -ga e810 00:18:04.587 15:41:43 -- nvmf/common.sh@296 -- # x722=() 
00:18:04.587 15:41:43 -- nvmf/common.sh@296 -- # local -ga x722 00:18:04.587 15:41:43 -- nvmf/common.sh@297 -- # mlx=() 00:18:04.587 15:41:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:04.587 15:41:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.587 15:41:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:04.587 15:41:43 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:04.587 15:41:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:04.587 15:41:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.587 15:41:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:04.587 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:04.587 15:41:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.587 15:41:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:04.587 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:04.587 15:41:43 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:04.587 15:41:43 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.587 15:41:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.587 15:41:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.587 15:41:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.587 15:41:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:04.587 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:04.587 15:41:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:18:04.587 15:41:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.587 15:41:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.587 15:41:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.587 15:41:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.587 15:41:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:04.587 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:04.587 15:41:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.587 15:41:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:04.587 15:41:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:04.587 15:41:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:04.587 15:41:43 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:04.587 15:41:43 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.588 15:41:43 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.588 15:41:43 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.588 15:41:43 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:04.588 15:41:43 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.588 15:41:43 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.588 15:41:43 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:04.588 15:41:43 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.588 15:41:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.588 15:41:43 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:04.588 15:41:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:04.588 15:41:43 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.588 15:41:43 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.588 15:41:43 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:04.588 15:41:43 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.588 15:41:43 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:04.588 15:41:43 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.588 15:41:43 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.588 15:41:43 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.588 15:41:43 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:04.588 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.588 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:18:04.588 00:18:04.588 --- 10.0.0.2 ping statistics --- 00:18:04.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.588 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:18:04.588 15:41:43 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.588 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:04.588 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:18:04.588 00:18:04.588 --- 10.0.0.1 ping statistics --- 00:18:04.588 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.588 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:18:04.588 15:41:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.588 15:41:43 -- nvmf/common.sh@410 -- # return 0 00:18:04.588 15:41:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:04.588 15:41:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.588 15:41:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:04.588 15:41:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:04.588 15:41:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.588 15:41:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:04.588 15:41:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:04.588 15:41:43 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2142035 00:18:04.588 15:41:43 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:04.588 15:41:43 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:04.588 15:41:43 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2142035 00:18:04.588 15:41:43 -- common/autotest_common.sh@819 -- # '[' -z 2142035 ']' 00:18:04.588 15:41:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.588 15:41:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.588 15:41:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
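Once this nvmf_tgt instance is listening on its RPC socket, fabrics_fuzz.sh (the rpc_cmd calls in the next stretch of the trace) builds a minimal fuzz target, a 64 MiB malloc bdev exported as a namespace of cnode1 behind a TCP listener, and then runs nvme_fuzz against it twice: a 30-second randomized pass with a fixed seed, and a replay of the fuzzer's example JSON corpus. Condensed, with the long workspace paths shortened and rpc.py assumed to be talking to the target's default /var/tmp/spdk.sock:

    #!/usr/bin/env bash
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create -b Malloc0 64 512
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    # Randomized 30-second pass with a fixed seed (flags copied from the trace).
    nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$TRID" -N -a

    # Second pass replaying the canned example.json command set.
    nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$TRID" -j example.json -a

The per-run summaries in the trace ("Dumping successful admin opcodes", command totals and random seeds) are the fuzzer's own end-of-run report; the trace then tears the subsystem down with nvmf_delete_subsystem and the suite is marked as passed.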
00:18:04.588 15:41:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.588 15:41:43 -- common/autotest_common.sh@10 -- # set +x 00:18:05.520 15:41:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:05.520 15:41:44 -- common/autotest_common.sh@852 -- # return 0 00:18:05.520 15:41:44 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:05.520 15:41:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.520 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:18:05.779 15:41:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.779 15:41:44 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:05.779 15:41:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.779 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:18:05.779 Malloc0 00:18:05.779 15:41:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.779 15:41:44 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:05.779 15:41:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.779 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:18:05.779 15:41:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.779 15:41:44 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:05.779 15:41:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.779 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:18:05.779 15:41:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.779 15:41:44 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.779 15:41:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:05.779 15:41:44 -- common/autotest_common.sh@10 -- # set +x 00:18:05.779 15:41:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:05.779 15:41:44 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:18:05.779 15:41:44 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:18:37.879 Fuzzing completed. Shutting down the fuzz application 00:18:37.879 00:18:37.879 Dumping successful admin opcodes: 00:18:37.879 8, 9, 10, 24, 00:18:37.879 Dumping successful io opcodes: 00:18:37.879 0, 9, 00:18:37.879 NS: 0x200003aeff00 I/O qp, Total commands completed: 470170, total successful commands: 2711, random_seed: 440952000 00:18:37.879 NS: 0x200003aeff00 admin qp, Total commands completed: 56608, total successful commands: 449, random_seed: 52069120 00:18:37.879 15:42:15 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:37.880 Fuzzing completed. 
Shutting down the fuzz application 00:18:37.880 00:18:37.880 Dumping successful admin opcodes: 00:18:37.880 24, 00:18:37.880 Dumping successful io opcodes: 00:18:37.880 00:18:37.880 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1143059733 00:18:37.880 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 1143199005 00:18:37.880 15:42:16 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:37.880 15:42:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:37.880 15:42:16 -- common/autotest_common.sh@10 -- # set +x 00:18:37.880 15:42:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:37.880 15:42:16 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:37.880 15:42:16 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:37.880 15:42:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:37.880 15:42:16 -- nvmf/common.sh@116 -- # sync 00:18:37.880 15:42:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:37.880 15:42:16 -- nvmf/common.sh@119 -- # set +e 00:18:37.880 15:42:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:37.880 15:42:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:37.880 rmmod nvme_tcp 00:18:37.880 rmmod nvme_fabrics 00:18:37.880 rmmod nvme_keyring 00:18:37.880 15:42:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:37.880 15:42:16 -- nvmf/common.sh@123 -- # set -e 00:18:37.880 15:42:16 -- nvmf/common.sh@124 -- # return 0 00:18:37.880 15:42:16 -- nvmf/common.sh@477 -- # '[' -n 2142035 ']' 00:18:37.880 15:42:16 -- nvmf/common.sh@478 -- # killprocess 2142035 00:18:37.880 15:42:16 -- common/autotest_common.sh@926 -- # '[' -z 2142035 ']' 00:18:37.880 15:42:16 -- common/autotest_common.sh@930 -- # kill -0 2142035 00:18:37.880 15:42:16 -- common/autotest_common.sh@931 -- # uname 00:18:37.880 15:42:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:37.880 15:42:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2142035 00:18:37.880 15:42:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:37.880 15:42:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:37.880 15:42:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2142035' 00:18:37.880 killing process with pid 2142035 00:18:37.880 15:42:16 -- common/autotest_common.sh@945 -- # kill 2142035 00:18:37.880 15:42:16 -- common/autotest_common.sh@950 -- # wait 2142035 00:18:37.880 15:42:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:37.880 15:42:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:37.880 15:42:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:37.880 15:42:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:37.880 15:42:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:37.880 15:42:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.880 15:42:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.880 15:42:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.411 15:42:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:40.411 15:42:19 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:18:40.411 00:18:40.411 real 0m37.516s 00:18:40.411 user 0m51.591s 00:18:40.411 sys 
0m15.633s 00:18:40.411 15:42:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:40.411 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:18:40.411 ************************************ 00:18:40.411 END TEST nvmf_fuzz 00:18:40.411 ************************************ 00:18:40.411 15:42:19 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:40.411 15:42:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:40.411 15:42:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:40.411 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:18:40.411 ************************************ 00:18:40.411 START TEST nvmf_multiconnection 00:18:40.411 ************************************ 00:18:40.411 15:42:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:40.411 * Looking for test storage... 00:18:40.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:40.411 15:42:19 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:40.411 15:42:19 -- nvmf/common.sh@7 -- # uname -s 00:18:40.411 15:42:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:40.411 15:42:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.411 15:42:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.411 15:42:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.411 15:42:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.411 15:42:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.411 15:42:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.411 15:42:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.411 15:42:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.411 15:42:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.411 15:42:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.411 15:42:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:40.411 15:42:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.411 15:42:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.411 15:42:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:40.411 15:42:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:40.411 15:42:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.411 15:42:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.411 15:42:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.411 15:42:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.411 15:42:19 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.411 15:42:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.411 15:42:19 -- paths/export.sh@5 -- # export PATH 00:18:40.411 15:42:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.411 15:42:19 -- nvmf/common.sh@46 -- # : 0 00:18:40.411 15:42:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:40.411 15:42:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:40.411 15:42:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:40.411 15:42:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.411 15:42:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.411 15:42:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:40.411 15:42:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:40.411 15:42:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:40.411 15:42:19 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:40.411 15:42:19 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:40.411 15:42:19 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:40.411 15:42:19 -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:40.411 15:42:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:40.411 15:42:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:40.411 15:42:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:40.411 15:42:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:40.411 15:42:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:40.411 15:42:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.411 15:42:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:40.411 15:42:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.411 15:42:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:40.411 15:42:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:40.411 15:42:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:40.411 15:42:19 -- common/autotest_common.sh@10 -- 
# set +x 00:18:42.313 15:42:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:42.313 15:42:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:42.313 15:42:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:42.313 15:42:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:42.313 15:42:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:42.313 15:42:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:42.313 15:42:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:42.313 15:42:21 -- nvmf/common.sh@294 -- # net_devs=() 00:18:42.313 15:42:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:42.313 15:42:21 -- nvmf/common.sh@295 -- # e810=() 00:18:42.313 15:42:21 -- nvmf/common.sh@295 -- # local -ga e810 00:18:42.313 15:42:21 -- nvmf/common.sh@296 -- # x722=() 00:18:42.313 15:42:21 -- nvmf/common.sh@296 -- # local -ga x722 00:18:42.313 15:42:21 -- nvmf/common.sh@297 -- # mlx=() 00:18:42.313 15:42:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:42.313 15:42:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:42.313 15:42:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:42.313 15:42:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:42.313 15:42:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:42.313 15:42:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:42.314 15:42:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:42.314 15:42:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:42.314 15:42:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:42.314 15:42:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:42.314 15:42:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:42.314 15:42:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:42.314 15:42:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:42.314 15:42:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:42.314 15:42:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:42.314 15:42:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.314 15:42:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:42.314 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:42.314 15:42:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:42.314 15:42:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:42.314 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:42.314 15:42:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:42.314 15:42:21 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:42.314 15:42:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.314 15:42:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.314 15:42:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.314 15:42:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.314 15:42:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:42.314 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:42.314 15:42:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.314 15:42:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:42.314 15:42:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:42.314 15:42:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:42.314 15:42:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:42.314 15:42:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:42.314 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:42.314 15:42:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:42.314 15:42:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:42.314 15:42:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:42.314 15:42:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:42.314 15:42:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:42.314 15:42:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:42.314 15:42:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:42.314 15:42:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:42.314 15:42:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:42.314 15:42:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:42.314 15:42:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:42.314 15:42:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:42.314 15:42:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:42.314 15:42:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:42.314 15:42:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:42.314 15:42:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:42.314 15:42:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:42.314 15:42:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:42.314 15:42:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:42.314 15:42:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:42.314 15:42:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:42.314 15:42:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:42.314 15:42:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:42.314 15:42:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:42.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:42.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:18:42.314 00:18:42.314 --- 10.0.0.2 ping statistics --- 00:18:42.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.314 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:42.314 15:42:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:42.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:42.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:18:42.314 00:18:42.314 --- 10.0.0.1 ping statistics --- 00:18:42.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:42.314 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:18:42.314 15:42:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:42.314 15:42:21 -- nvmf/common.sh@410 -- # return 0 00:18:42.314 15:42:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:42.314 15:42:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:42.314 15:42:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:42.314 15:42:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:42.314 15:42:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:42.314 15:42:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:42.314 15:42:21 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:42.314 15:42:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:42.314 15:42:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:42.314 15:42:21 -- common/autotest_common.sh@10 -- # set +x 00:18:42.314 15:42:21 -- nvmf/common.sh@469 -- # nvmfpid=2148516 00:18:42.314 15:42:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:42.314 15:42:21 -- nvmf/common.sh@470 -- # waitforlisten 2148516 00:18:42.314 15:42:21 -- common/autotest_common.sh@819 -- # '[' -z 2148516 ']' 00:18:42.314 15:42:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:42.314 15:42:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:42.314 15:42:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:42.314 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:42.314 15:42:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:42.314 15:42:21 -- common/autotest_common.sh@10 -- # set +x 00:18:42.314 [2024-07-10 15:42:21.402012] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:42.314 [2024-07-10 15:42:21.402082] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:42.314 EAL: No free 2048 kB hugepages reported on node 1 00:18:42.314 [2024-07-10 15:42:21.465345] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:42.314 [2024-07-10 15:42:21.571395] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:42.314 [2024-07-10 15:42:21.571567] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
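The namespace plumbing traced just above reduces to a short sequence. The sketch below condenses it from the xtrace output; the interface names cvl_0_0/cvl_0_1, the 10.0.0.x addresses, and the 0xF core mask are simply the values this particular run detected, and the nvmf_tgt path is abbreviated.

  # Condensed sketch of the target-namespace setup traced above (nvmf/common.sh).
  ip netns add cvl_0_0_ns_spdk                        # namespace that will host nvmf_tgt
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target-side E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator-side port stays in the root ns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # reachability checks in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # nvmfappstart then launches the target inside the namespace (path abbreviated):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The namespace split is what lets a single host act as both NVMe/TCP initiator and target over the physical E810 ports: the target process and cvl_0_0 live inside cvl_0_0_ns_spdk, while the initiator-side tools talk to 10.0.0.2 through cvl_0_1 in the root namespace.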
00:18:42.314 [2024-07-10 15:42:21.571585] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:42.314 [2024-07-10 15:42:21.571597] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:42.314 [2024-07-10 15:42:21.571662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:42.314 [2024-07-10 15:42:21.571715] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:42.314 [2024-07-10 15:42:21.571781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:42.314 [2024-07-10 15:42:21.571784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.249 15:42:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:43.249 15:42:22 -- common/autotest_common.sh@852 -- # return 0 00:18:43.249 15:42:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:43.249 15:42:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:43.249 15:42:22 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 [2024-07-10 15:42:22.417063] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:43.249 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.249 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 Malloc1 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 [2024-07-10 15:42:22.474694] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.249 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:43.249 15:42:22 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 Malloc2 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.249 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 Malloc3 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.249 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 Malloc4 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.249 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.249 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:43.249 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.249 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.508 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 Malloc5 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.508 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 Malloc6 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.508 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 Malloc7 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.508 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:43.508 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.508 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.508 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.509 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 Malloc8 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.509 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 Malloc9 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
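The block of RPCs above (and the ones that continue through cnode11 below) all come from the same loop in multiconnection.sh. A condensed sketch of that per-subsystem pattern, reconstructed from the xtrace with NVMF_SUBSYS=11 as set in this run, is:

  # One transport, then the same four RPCs per subsystem (multiconnection.sh loop).
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192       # done once, before the loop
  for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b Malloc$i       # 64 MB malloc bdev, 512-byte blocks
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

The initiator side then runs nvme connect against each nqn.2016-06.io.spdk:cnode$i and polls lsblk for the matching SPDK$i serial (waitforserial) before moving to the next subsystem, which is the repeating connect/sleep pattern visible in the trace that follows.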
00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.509 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.509 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.509 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:43.509 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.509 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 Malloc10 00:18:43.767 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.767 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:43.767 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.767 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.767 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:43.767 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.767 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.767 15:42:22 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:43.767 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.767 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.767 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.767 15:42:22 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.767 15:42:22 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:43.768 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.768 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 Malloc11 00:18:43.768 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.768 15:42:22 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:43.768 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.768 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.768 15:42:22 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:43.768 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.768 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.768 15:42:22 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:43.768 15:42:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:43.768 15:42:22 -- common/autotest_common.sh@10 -- # set +x 00:18:43.768 15:42:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:43.768 15:42:22 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:43.768 15:42:22 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:43.768 15:42:22 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:44.335 15:42:23 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:44.335 15:42:23 -- common/autotest_common.sh@1177 -- # local i=0 00:18:44.335 15:42:23 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:44.335 15:42:23 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:44.335 15:42:23 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:46.312 15:42:25 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:46.312 15:42:25 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:46.312 15:42:25 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:46.312 15:42:25 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:46.312 15:42:25 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:46.312 15:42:25 -- common/autotest_common.sh@1187 -- # return 0 00:18:46.312 15:42:25 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.312 15:42:25 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:47.246 15:42:26 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:47.246 15:42:26 -- common/autotest_common.sh@1177 -- # local i=0 00:18:47.246 15:42:26 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.246 15:42:26 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:47.246 15:42:26 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:49.144 15:42:28 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:49.144 15:42:28 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:49.144 15:42:28 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:49.144 15:42:28 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:49.144 15:42:28 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:49.144 15:42:28 -- common/autotest_common.sh@1187 -- # return 0 00:18:49.144 15:42:28 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:49.144 15:42:28 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:50.078 15:42:29 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:50.078 15:42:29 -- common/autotest_common.sh@1177 -- # local i=0 00:18:50.078 15:42:29 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.078 15:42:29 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:50.078 15:42:29 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:18:51.979 15:42:31 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:51.979 15:42:31 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:51.979 15:42:31 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:51.979 15:42:31 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:51.979 15:42:31 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:51.979 15:42:31 -- common/autotest_common.sh@1187 -- # return 0 00:18:51.979 15:42:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:51.979 15:42:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:52.545 15:42:31 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:52.545 15:42:31 -- common/autotest_common.sh@1177 -- # local i=0 00:18:52.545 15:42:31 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.545 15:42:31 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:52.545 15:42:31 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:55.071 15:42:33 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:55.071 15:42:33 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:55.071 15:42:33 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:55.071 15:42:33 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:55.071 15:42:33 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:55.071 15:42:33 -- common/autotest_common.sh@1187 -- # return 0 00:18:55.071 15:42:33 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:55.071 15:42:33 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:55.329 15:42:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:55.329 15:42:34 -- common/autotest_common.sh@1177 -- # local i=0 00:18:55.329 15:42:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:55.329 15:42:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:55.329 15:42:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:57.227 15:42:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:57.484 15:42:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:57.484 15:42:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:57.484 15:42:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:57.484 15:42:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:57.484 15:42:36 -- common/autotest_common.sh@1187 -- # return 0 00:18:57.484 15:42:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:57.484 15:42:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:58.051 15:42:37 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:58.051 15:42:37 -- common/autotest_common.sh@1177 -- # local i=0 00:18:58.051 15:42:37 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:58.051 15:42:37 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:58.051 15:42:37 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:00.577 15:42:39 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:00.577 15:42:39 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:00.577 15:42:39 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:19:00.577 15:42:39 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:00.577 15:42:39 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.577 15:42:39 -- common/autotest_common.sh@1187 -- # return 0 00:19:00.577 15:42:39 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.577 15:42:39 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:19:00.835 15:42:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:00.835 15:42:40 -- common/autotest_common.sh@1177 -- # local i=0 00:19:00.835 15:42:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.835 15:42:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:00.835 15:42:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:03.362 15:42:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:03.362 15:42:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:03.362 15:42:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:19:03.362 15:42:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:03.362 15:42:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:03.362 15:42:42 -- common/autotest_common.sh@1187 -- # return 0 00:19:03.362 15:42:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:03.362 15:42:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:19:03.630 15:42:42 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:03.630 15:42:42 -- common/autotest_common.sh@1177 -- # local i=0 00:19:03.630 15:42:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.630 15:42:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:03.630 15:42:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:05.526 15:42:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:05.526 15:42:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:05.526 15:42:44 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:19:05.526 15:42:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:05.526 15:42:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:05.526 15:42:44 -- common/autotest_common.sh@1187 -- # return 0 00:19:05.526 15:42:44 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:05.526 15:42:44 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:19:06.898 15:42:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:06.898 
15:42:45 -- common/autotest_common.sh@1177 -- # local i=0 00:19:06.898 15:42:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:06.898 15:42:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:06.898 15:42:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:08.797 15:42:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:08.797 15:42:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:08.797 15:42:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:19:08.797 15:42:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:08.797 15:42:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:08.797 15:42:47 -- common/autotest_common.sh@1187 -- # return 0 00:19:08.797 15:42:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:08.797 15:42:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:19:09.730 15:42:48 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:09.730 15:42:48 -- common/autotest_common.sh@1177 -- # local i=0 00:19:09.730 15:42:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.730 15:42:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:09.730 15:42:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:11.629 15:42:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:11.629 15:42:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:11.629 15:42:50 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:19:11.629 15:42:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:11.629 15:42:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.629 15:42:50 -- common/autotest_common.sh@1187 -- # return 0 00:19:11.629 15:42:50 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:11.630 15:42:50 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:19:12.565 15:42:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:12.565 15:42:51 -- common/autotest_common.sh@1177 -- # local i=0 00:19:12.565 15:42:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:12.565 15:42:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:12.565 15:42:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:14.462 15:42:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:14.462 15:42:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:14.462 15:42:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:19:14.462 15:42:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:14.462 15:42:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:14.462 15:42:53 -- common/autotest_common.sh@1187 -- # return 0 00:19:14.462 15:42:53 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:14.462 [global] 00:19:14.462 thread=1 00:19:14.462 invalidate=1 00:19:14.462 rw=read 00:19:14.462 time_based=1 00:19:14.462 
runtime=10 00:19:14.462 ioengine=libaio 00:19:14.462 direct=1 00:19:14.462 bs=262144 00:19:14.462 iodepth=64 00:19:14.462 norandommap=1 00:19:14.462 numjobs=1 00:19:14.462 00:19:14.462 [job0] 00:19:14.462 filename=/dev/nvme0n1 00:19:14.462 [job1] 00:19:14.462 filename=/dev/nvme10n1 00:19:14.462 [job2] 00:19:14.462 filename=/dev/nvme1n1 00:19:14.462 [job3] 00:19:14.462 filename=/dev/nvme2n1 00:19:14.462 [job4] 00:19:14.462 filename=/dev/nvme3n1 00:19:14.462 [job5] 00:19:14.462 filename=/dev/nvme4n1 00:19:14.462 [job6] 00:19:14.462 filename=/dev/nvme5n1 00:19:14.462 [job7] 00:19:14.462 filename=/dev/nvme6n1 00:19:14.462 [job8] 00:19:14.462 filename=/dev/nvme7n1 00:19:14.462 [job9] 00:19:14.462 filename=/dev/nvme8n1 00:19:14.462 [job10] 00:19:14.462 filename=/dev/nvme9n1 00:19:14.462 Could not set queue depth (nvme0n1) 00:19:14.462 Could not set queue depth (nvme10n1) 00:19:14.462 Could not set queue depth (nvme1n1) 00:19:14.462 Could not set queue depth (nvme2n1) 00:19:14.462 Could not set queue depth (nvme3n1) 00:19:14.462 Could not set queue depth (nvme4n1) 00:19:14.462 Could not set queue depth (nvme5n1) 00:19:14.462 Could not set queue depth (nvme6n1) 00:19:14.462 Could not set queue depth (nvme7n1) 00:19:14.462 Could not set queue depth (nvme8n1) 00:19:14.462 Could not set queue depth (nvme9n1) 00:19:14.720 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:14.720 fio-3.35 00:19:14.720 Starting 11 threads 00:19:26.933 00:19:26.933 job0: (groupid=0, jobs=1): err= 0: pid=2153030: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=768, BW=192MiB/s (202MB/s)(1941MiB/10100msec) 00:19:26.933 slat (usec): min=14, max=60979, avg=1267.89, stdev=3809.01 00:19:26.933 clat (msec): min=5, max=223, avg=81.91, stdev=44.54 00:19:26.933 lat (msec): min=5, max=227, avg=83.18, stdev=45.25 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 27], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:19:26.933 | 30.00th=[ 50], 40.00th=[ 57], 50.00th=[ 66], 60.00th=[ 82], 00:19:26.933 | 70.00th=[ 102], 80.00th=[ 127], 90.00th=[ 157], 95.00th=[ 169], 00:19:26.933 | 99.00th=[ 184], 99.50th=[ 194], 99.90th=[ 209], 99.95th=[ 209], 00:19:26.933 | 99.99th=[ 224] 00:19:26.933 bw ( KiB/s): min=89600, max=365056, per=10.93%, avg=197140.35, 
stdev=95022.71, samples=20 00:19:26.933 iops : min= 350, max= 1426, avg=770.05, stdev=371.15, samples=20 00:19:26.933 lat (msec) : 10=0.24%, 20=0.24%, 50=30.11%, 100=39.11%, 250=30.29% 00:19:26.933 cpu : usr=0.49%, sys=2.80%, ctx=1645, majf=0, minf=4097 00:19:26.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:26.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.933 issued rwts: total=7765,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.933 job1: (groupid=0, jobs=1): err= 0: pid=2153031: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=812, BW=203MiB/s (213MB/s)(2051MiB/10103msec) 00:19:26.933 slat (usec): min=9, max=130317, avg=902.01, stdev=4709.08 00:19:26.933 clat (usec): min=1864, max=316741, avg=77848.09, stdev=48293.07 00:19:26.933 lat (msec): min=3, max=319, avg=78.75, stdev=49.06 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 24], 20.00th=[ 36], 00:19:26.933 | 30.00th=[ 45], 40.00th=[ 56], 50.00th=[ 67], 60.00th=[ 80], 00:19:26.933 | 70.00th=[ 99], 80.00th=[ 120], 90.00th=[ 161], 95.00th=[ 169], 00:19:26.933 | 99.00th=[ 186], 99.50th=[ 197], 99.90th=[ 236], 99.95th=[ 257], 00:19:26.933 | 99.99th=[ 317] 00:19:26.933 bw ( KiB/s): min=105472, max=326656, per=11.55%, avg=208379.90, stdev=72029.71, samples=20 00:19:26.933 iops : min= 412, max= 1276, avg=813.95, stdev=281.32, samples=20 00:19:26.933 lat (msec) : 2=0.01%, 4=0.04%, 10=3.27%, 20=4.92%, 50=25.61% 00:19:26.933 lat (msec) : 100=37.41%, 250=28.68%, 500=0.06% 00:19:26.933 cpu : usr=0.36%, sys=2.72%, ctx=2048, majf=0, minf=4097 00:19:26.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:26.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.933 issued rwts: total=8204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.933 job2: (groupid=0, jobs=1): err= 0: pid=2153036: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=589, BW=147MiB/s (155MB/s)(1479MiB/10029msec) 00:19:26.933 slat (usec): min=9, max=116729, avg=1256.87, stdev=5187.24 00:19:26.933 clat (msec): min=4, max=313, avg=107.15, stdev=55.58 00:19:26.933 lat (msec): min=4, max=342, avg=108.41, stdev=56.18 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 11], 5.00th=[ 27], 10.00th=[ 52], 20.00th=[ 63], 00:19:26.933 | 30.00th=[ 71], 40.00th=[ 82], 50.00th=[ 94], 60.00th=[ 112], 00:19:26.933 | 70.00th=[ 131], 80.00th=[ 155], 90.00th=[ 188], 95.00th=[ 213], 00:19:26.933 | 99.00th=[ 253], 99.50th=[ 288], 99.90th=[ 313], 99.95th=[ 313], 00:19:26.933 | 99.99th=[ 313] 00:19:26.933 bw ( KiB/s): min=75776, max=272384, per=8.31%, avg=149855.45, stdev=54042.78, samples=20 00:19:26.933 iops : min= 296, max= 1064, avg=585.35, stdev=211.08, samples=20 00:19:26.933 lat (msec) : 10=0.79%, 20=3.09%, 50=4.97%, 100=44.38%, 250=45.48% 00:19:26.933 lat (msec) : 500=1.28% 00:19:26.933 cpu : usr=0.34%, sys=2.00%, ctx=1475, majf=0, minf=4097 00:19:26.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:26.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.933 
issued rwts: total=5917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.933 job3: (groupid=0, jobs=1): err= 0: pid=2153037: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=781, BW=195MiB/s (205MB/s)(1959MiB/10033msec) 00:19:26.933 slat (usec): min=9, max=119162, avg=535.25, stdev=3001.36 00:19:26.933 clat (usec): min=1096, max=289539, avg=81353.87, stdev=53559.35 00:19:26.933 lat (usec): min=1129, max=289555, avg=81889.13, stdev=53931.57 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 22], 20.00th=[ 35], 00:19:26.933 | 30.00th=[ 47], 40.00th=[ 59], 50.00th=[ 70], 60.00th=[ 82], 00:19:26.933 | 70.00th=[ 99], 80.00th=[ 127], 90.00th=[ 163], 95.00th=[ 190], 00:19:26.933 | 99.00th=[ 222], 99.50th=[ 247], 99.90th=[ 279], 99.95th=[ 284], 00:19:26.933 | 99.99th=[ 292] 00:19:26.933 bw ( KiB/s): min=94019, max=399360, per=11.03%, avg=198979.35, stdev=81265.24, samples=20 00:19:26.933 iops : min= 367, max= 1560, avg=777.25, stdev=317.46, samples=20 00:19:26.933 lat (msec) : 2=0.15%, 4=0.36%, 10=2.12%, 20=6.06%, 50=24.13% 00:19:26.933 lat (msec) : 100=38.43%, 250=28.28%, 500=0.47% 00:19:26.933 cpu : usr=0.35%, sys=2.23%, ctx=2197, majf=0, minf=4097 00:19:26.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:26.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.933 issued rwts: total=7836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.933 job4: (groupid=0, jobs=1): err= 0: pid=2153038: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=711, BW=178MiB/s (186MB/s)(1785MiB/10043msec) 00:19:26.933 slat (usec): min=9, max=112702, avg=870.92, stdev=4250.03 00:19:26.933 clat (usec): min=1382, max=371162, avg=89073.91, stdev=52840.25 00:19:26.933 lat (usec): min=1413, max=371205, avg=89944.83, stdev=53479.58 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 9], 5.00th=[ 20], 10.00th=[ 31], 20.00th=[ 47], 00:19:26.933 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 80], 60.00th=[ 91], 00:19:26.933 | 70.00th=[ 109], 80.00th=[ 136], 90.00th=[ 165], 95.00th=[ 178], 00:19:26.933 | 99.00th=[ 241], 99.50th=[ 309], 99.90th=[ 368], 99.95th=[ 372], 00:19:26.933 | 99.99th=[ 372] 00:19:26.933 bw ( KiB/s): min=66048, max=291328, per=10.04%, avg=181178.80, stdev=64773.56, samples=20 00:19:26.933 iops : min= 258, max= 1138, avg=707.70, stdev=253.02, samples=20 00:19:26.933 lat (msec) : 2=0.07%, 4=0.20%, 10=1.05%, 20=3.85%, 50=18.60% 00:19:26.933 lat (msec) : 100=42.14%, 250=33.31%, 500=0.78% 00:19:26.933 cpu : usr=0.32%, sys=2.02%, ctx=1829, majf=0, minf=4097 00:19:26.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:26.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.933 issued rwts: total=7141,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.933 job5: (groupid=0, jobs=1): err= 0: pid=2153039: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=476, BW=119MiB/s (125MB/s)(1194MiB/10027msec) 00:19:26.933 slat (usec): min=14, max=86635, avg=2087.56, stdev=6267.91 00:19:26.933 clat (msec): min=26, max=326, avg=132.19, stdev=49.42 00:19:26.933 lat (msec): min=26, max=327, 
avg=134.28, stdev=50.22 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 48], 5.00th=[ 65], 10.00th=[ 72], 20.00th=[ 85], 00:19:26.933 | 30.00th=[ 99], 40.00th=[ 111], 50.00th=[ 127], 60.00th=[ 144], 00:19:26.933 | 70.00th=[ 161], 80.00th=[ 178], 90.00th=[ 203], 95.00th=[ 222], 00:19:26.933 | 99.00th=[ 251], 99.50th=[ 264], 99.90th=[ 288], 99.95th=[ 288], 00:19:26.933 | 99.99th=[ 326] 00:19:26.933 bw ( KiB/s): min=67072, max=195584, per=6.69%, avg=120660.60, stdev=39552.10, samples=20 00:19:26.933 iops : min= 262, max= 764, avg=471.30, stdev=154.53, samples=20 00:19:26.933 lat (msec) : 50=1.28%, 100=30.34%, 250=67.32%, 500=1.07% 00:19:26.933 cpu : usr=0.26%, sys=1.87%, ctx=1001, majf=0, minf=4097 00:19:26.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:26.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.933 issued rwts: total=4776,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.933 job6: (groupid=0, jobs=1): err= 0: pid=2153040: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=579, BW=145MiB/s (152MB/s)(1463MiB/10103msec) 00:19:26.933 slat (usec): min=9, max=86399, avg=1203.30, stdev=4509.85 00:19:26.933 clat (msec): min=7, max=293, avg=109.20, stdev=53.57 00:19:26.933 lat (msec): min=7, max=293, avg=110.40, stdev=54.29 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 26], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 57], 00:19:26.933 | 30.00th=[ 70], 40.00th=[ 83], 50.00th=[ 100], 60.00th=[ 124], 00:19:26.933 | 70.00th=[ 142], 80.00th=[ 161], 90.00th=[ 184], 95.00th=[ 205], 00:19:26.933 | 99.00th=[ 236], 99.50th=[ 257], 99.90th=[ 284], 99.95th=[ 296], 00:19:26.933 | 99.99th=[ 296] 00:19:26.933 bw ( KiB/s): min=83800, max=331264, per=8.21%, avg=148190.00, stdev=65031.68, samples=20 00:19:26.933 iops : min= 327, max= 1294, avg=578.85, stdev=254.05, samples=20 00:19:26.933 lat (msec) : 10=0.07%, 20=0.72%, 50=14.39%, 100=34.86%, 250=49.37% 00:19:26.933 lat (msec) : 500=0.60% 00:19:26.933 cpu : usr=0.31%, sys=1.94%, ctx=1532, majf=0, minf=4097 00:19:26.933 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:26.933 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.933 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.933 issued rwts: total=5852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.933 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.933 job7: (groupid=0, jobs=1): err= 0: pid=2153041: Wed Jul 10 15:43:04 2024 00:19:26.933 read: IOPS=722, BW=181MiB/s (189MB/s)(1814MiB/10043msec) 00:19:26.933 slat (usec): min=10, max=127207, avg=1051.05, stdev=4430.56 00:19:26.933 clat (msec): min=2, max=266, avg=87.50, stdev=52.02 00:19:26.933 lat (msec): min=2, max=289, avg=88.55, stdev=52.66 00:19:26.933 clat percentiles (msec): 00:19:26.933 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 33], 20.00th=[ 37], 00:19:26.933 | 30.00th=[ 46], 40.00th=[ 62], 50.00th=[ 77], 60.00th=[ 91], 00:19:26.933 | 70.00th=[ 116], 80.00th=[ 142], 90.00th=[ 165], 95.00th=[ 180], 00:19:26.933 | 99.00th=[ 220], 99.50th=[ 234], 99.90th=[ 253], 99.95th=[ 255], 00:19:26.933 | 99.99th=[ 268] 00:19:26.933 bw ( KiB/s): min=91648, max=438784, per=10.20%, avg=184050.75, stdev=93774.65, samples=20 00:19:26.934 iops : min= 358, max= 1714, avg=718.90, stdev=366.20, samples=20 00:19:26.934 lat 
(msec) : 4=0.17%, 10=0.74%, 20=2.21%, 50=29.71%, 100=31.49% 00:19:26.934 lat (msec) : 250=35.53%, 500=0.17% 00:19:26.934 cpu : usr=0.37%, sys=2.45%, ctx=1733, majf=0, minf=4097 00:19:26.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:26.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.934 issued rwts: total=7254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.934 job8: (groupid=0, jobs=1): err= 0: pid=2153044: Wed Jul 10 15:43:04 2024 00:19:26.934 read: IOPS=595, BW=149MiB/s (156MB/s)(1505MiB/10104msec) 00:19:26.934 slat (usec): min=10, max=134454, avg=1153.35, stdev=5965.22 00:19:26.934 clat (msec): min=2, max=305, avg=106.25, stdev=55.34 00:19:26.934 lat (msec): min=2, max=328, avg=107.40, stdev=56.21 00:19:26.934 clat percentiles (msec): 00:19:26.934 | 1.00th=[ 10], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 56], 00:19:26.934 | 30.00th=[ 70], 40.00th=[ 82], 50.00th=[ 99], 60.00th=[ 123], 00:19:26.934 | 70.00th=[ 142], 80.00th=[ 161], 90.00th=[ 180], 95.00th=[ 199], 00:19:26.934 | 99.00th=[ 228], 99.50th=[ 239], 99.90th=[ 271], 99.95th=[ 292], 00:19:26.934 | 99.99th=[ 305] 00:19:26.934 bw ( KiB/s): min=84992, max=312832, per=8.45%, avg=152424.00, stdev=58748.61, samples=20 00:19:26.934 iops : min= 332, max= 1222, avg=595.35, stdev=229.52, samples=20 00:19:26.934 lat (msec) : 4=0.10%, 10=1.20%, 20=2.66%, 50=12.25%, 100=34.70% 00:19:26.934 lat (msec) : 250=48.75%, 500=0.35% 00:19:26.934 cpu : usr=0.34%, sys=2.01%, ctx=1660, majf=0, minf=4097 00:19:26.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:26.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.934 issued rwts: total=6018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.934 job9: (groupid=0, jobs=1): err= 0: pid=2153045: Wed Jul 10 15:43:04 2024 00:19:26.934 read: IOPS=446, BW=112MiB/s (117MB/s)(1127MiB/10098msec) 00:19:26.934 slat (usec): min=10, max=55757, avg=1931.47, stdev=5740.15 00:19:26.934 clat (msec): min=16, max=298, avg=141.36, stdev=50.55 00:19:26.934 lat (msec): min=16, max=298, avg=143.29, stdev=51.50 00:19:26.934 clat percentiles (msec): 00:19:26.934 | 1.00th=[ 22], 5.00th=[ 47], 10.00th=[ 66], 20.00th=[ 96], 00:19:26.934 | 30.00th=[ 123], 40.00th=[ 138], 50.00th=[ 153], 60.00th=[ 161], 00:19:26.934 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 197], 95.00th=[ 220], 00:19:26.934 | 99.00th=[ 253], 99.50th=[ 262], 99.90th=[ 288], 99.95th=[ 288], 00:19:26.934 | 99.99th=[ 300] 00:19:26.934 bw ( KiB/s): min=78179, max=196608, per=6.30%, avg=113732.95, stdev=32216.63, samples=20 00:19:26.934 iops : min= 305, max= 768, avg=444.25, stdev=125.87, samples=20 00:19:26.934 lat (msec) : 20=0.78%, 50=5.70%, 100=15.07%, 250=77.14%, 500=1.31% 00:19:26.934 cpu : usr=0.23%, sys=1.77%, ctx=1167, majf=0, minf=4097 00:19:26.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:26.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.934 issued rwts: total=4506,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.934 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:19:26.934 job10: (groupid=0, jobs=1): err= 0: pid=2153046: Wed Jul 10 15:43:04 2024 00:19:26.934 read: IOPS=589, BW=147MiB/s (155MB/s)(1482MiB/10055msec) 00:19:26.934 slat (usec): min=9, max=161650, avg=833.51, stdev=4975.13 00:19:26.934 clat (msec): min=2, max=383, avg=107.61, stdev=61.31 00:19:26.934 lat (msec): min=2, max=383, avg=108.44, stdev=61.78 00:19:26.934 clat percentiles (msec): 00:19:26.934 | 1.00th=[ 13], 5.00th=[ 26], 10.00th=[ 42], 20.00th=[ 60], 00:19:26.934 | 30.00th=[ 70], 40.00th=[ 81], 50.00th=[ 93], 60.00th=[ 110], 00:19:26.934 | 70.00th=[ 134], 80.00th=[ 161], 90.00th=[ 184], 95.00th=[ 222], 00:19:26.934 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 376], 99.95th=[ 384], 00:19:26.934 | 99.99th=[ 384] 00:19:26.934 bw ( KiB/s): min=65536, max=254464, per=8.32%, avg=150158.35, stdev=50082.26, samples=20 00:19:26.934 iops : min= 256, max= 994, avg=586.55, stdev=195.64, samples=20 00:19:26.934 lat (msec) : 4=0.08%, 10=0.71%, 20=2.55%, 50=11.57%, 100=39.65% 00:19:26.934 lat (msec) : 250=42.99%, 500=2.45% 00:19:26.934 cpu : usr=0.18%, sys=1.95%, ctx=1814, majf=0, minf=3721 00:19:26.934 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:19:26.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:26.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:26.934 issued rwts: total=5929,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:26.934 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:26.934 00:19:26.934 Run status group 0 (all jobs): 00:19:26.934 READ: bw=1762MiB/s (1847MB/s), 112MiB/s-203MiB/s (117MB/s-213MB/s), io=17.4GiB (18.7GB), run=10027-10104msec 00:19:26.934 00:19:26.934 Disk stats (read/write): 00:19:26.934 nvme0n1: ios=15340/0, merge=0/0, ticks=1227298/0, in_queue=1227298, util=97.09% 00:19:26.934 nvme10n1: ios=16191/0, merge=0/0, ticks=1234655/0, in_queue=1234655, util=97.33% 00:19:26.934 nvme1n1: ios=11548/0, merge=0/0, ticks=1238450/0, in_queue=1238450, util=97.61% 00:19:26.934 nvme2n1: ios=15333/0, merge=0/0, ticks=1245576/0, in_queue=1245576, util=97.76% 00:19:26.934 nvme3n1: ios=14045/0, merge=0/0, ticks=1238684/0, in_queue=1238684, util=97.84% 00:19:26.934 nvme4n1: ios=9305/0, merge=0/0, ticks=1226206/0, in_queue=1226206, util=98.17% 00:19:26.934 nvme5n1: ios=11488/0, merge=0/0, ticks=1232939/0, in_queue=1232939, util=98.34% 00:19:26.934 nvme6n1: ios=14264/0, merge=0/0, ticks=1234583/0, in_queue=1234583, util=98.44% 00:19:26.934 nvme7n1: ios=11801/0, merge=0/0, ticks=1230221/0, in_queue=1230221, util=98.87% 00:19:26.934 nvme8n1: ios=8810/0, merge=0/0, ticks=1223902/0, in_queue=1223902, util=99.09% 00:19:26.934 nvme9n1: ios=11646/0, merge=0/0, ticks=1242249/0, in_queue=1242249, util=99.23% 00:19:26.934 15:43:04 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:26.934 [global] 00:19:26.934 thread=1 00:19:26.934 invalidate=1 00:19:26.934 rw=randwrite 00:19:26.934 time_based=1 00:19:26.934 runtime=10 00:19:26.934 ioengine=libaio 00:19:26.934 direct=1 00:19:26.934 bs=262144 00:19:26.934 iodepth=64 00:19:26.934 norandommap=1 00:19:26.934 numjobs=1 00:19:26.934 00:19:26.934 [job0] 00:19:26.934 filename=/dev/nvme0n1 00:19:26.934 [job1] 00:19:26.934 filename=/dev/nvme10n1 00:19:26.934 [job2] 00:19:26.934 filename=/dev/nvme1n1 00:19:26.934 [job3] 00:19:26.934 filename=/dev/nvme2n1 00:19:26.934 [job4] 00:19:26.934 filename=/dev/nvme3n1 00:19:26.934 [job5] 
00:19:26.934 filename=/dev/nvme4n1 00:19:26.934 [job6] 00:19:26.934 filename=/dev/nvme5n1 00:19:26.934 [job7] 00:19:26.934 filename=/dev/nvme6n1 00:19:26.934 [job8] 00:19:26.934 filename=/dev/nvme7n1 00:19:26.934 [job9] 00:19:26.934 filename=/dev/nvme8n1 00:19:26.934 [job10] 00:19:26.934 filename=/dev/nvme9n1 00:19:26.934 Could not set queue depth (nvme0n1) 00:19:26.934 Could not set queue depth (nvme10n1) 00:19:26.934 Could not set queue depth (nvme1n1) 00:19:26.934 Could not set queue depth (nvme2n1) 00:19:26.934 Could not set queue depth (nvme3n1) 00:19:26.934 Could not set queue depth (nvme4n1) 00:19:26.934 Could not set queue depth (nvme5n1) 00:19:26.934 Could not set queue depth (nvme6n1) 00:19:26.934 Could not set queue depth (nvme7n1) 00:19:26.934 Could not set queue depth (nvme8n1) 00:19:26.934 Could not set queue depth (nvme9n1) 00:19:26.934 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:26.934 fio-3.35 00:19:26.934 Starting 11 threads 00:19:36.971 00:19:36.971 job0: (groupid=0, jobs=1): err= 0: pid=2154095: Wed Jul 10 15:43:15 2024 00:19:36.971 write: IOPS=552, BW=138MiB/s (145MB/s)(1397MiB/10113msec); 0 zone resets 00:19:36.971 slat (usec): min=23, max=80520, avg=1384.51, stdev=3940.04 00:19:36.971 clat (usec): min=1620, max=430943, avg=114362.06, stdev=80005.77 00:19:36.971 lat (usec): min=1664, max=435548, avg=115746.57, stdev=81107.79 00:19:36.971 clat percentiles (msec): 00:19:36.971 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 31], 20.00th=[ 53], 00:19:36.971 | 30.00th=[ 67], 40.00th=[ 77], 50.00th=[ 86], 60.00th=[ 111], 00:19:36.971 | 70.00th=[ 146], 80.00th=[ 169], 90.00th=[ 215], 95.00th=[ 284], 00:19:36.971 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 422], 99.95th=[ 430], 00:19:36.971 | 99.99th=[ 430] 00:19:36.971 bw ( KiB/s): min=44966, max=255488, per=10.51%, avg=141463.80, stdev=68141.61, samples=20 00:19:36.971 iops : min= 175, max= 998, avg=552.50, stdev=266.23, samples=20 00:19:36.971 lat (msec) : 2=0.07%, 4=0.27%, 10=1.34%, 20=4.08%, 50=12.18% 00:19:36.971 lat (msec) : 100=36.80%, 250=37.13%, 500=8.12% 00:19:36.971 cpu : usr=1.94%, sys=1.99%, ctx=2906, majf=0, minf=1 00:19:36.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 
32=0.6%, >=64=98.9% 00:19:36.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.971 issued rwts: total=0,5589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.971 job1: (groupid=0, jobs=1): err= 0: pid=2154107: Wed Jul 10 15:43:15 2024 00:19:36.971 write: IOPS=810, BW=203MiB/s (213MB/s)(2050MiB/10113msec); 0 zone resets 00:19:36.971 slat (usec): min=18, max=132004, avg=844.17, stdev=3149.43 00:19:36.971 clat (usec): min=1470, max=387867, avg=78041.80, stdev=51718.59 00:19:36.971 lat (usec): min=1527, max=387897, avg=78885.97, stdev=52058.44 00:19:36.971 clat percentiles (msec): 00:19:36.971 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 28], 20.00th=[ 43], 00:19:36.971 | 30.00th=[ 51], 40.00th=[ 58], 50.00th=[ 70], 60.00th=[ 77], 00:19:36.971 | 70.00th=[ 89], 80.00th=[ 109], 90.00th=[ 144], 95.00th=[ 182], 00:19:36.971 | 99.00th=[ 284], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 376], 00:19:36.971 | 99.99th=[ 388] 00:19:36.971 bw ( KiB/s): min=97792, max=335360, per=15.47%, avg=208251.70, stdev=65545.13, samples=20 00:19:36.971 iops : min= 382, max= 1310, avg=813.45, stdev=255.99, samples=20 00:19:36.971 lat (msec) : 2=0.11%, 4=0.84%, 10=2.17%, 20=3.88%, 50=22.33% 00:19:36.971 lat (msec) : 100=46.35%, 250=22.89%, 500=1.43% 00:19:36.971 cpu : usr=2.37%, sys=2.75%, ctx=4237, majf=0, minf=1 00:19:36.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:36.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.971 issued rwts: total=0,8199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.971 job2: (groupid=0, jobs=1): err= 0: pid=2154108: Wed Jul 10 15:43:15 2024 00:19:36.971 write: IOPS=428, BW=107MiB/s (112MB/s)(1090MiB/10162msec); 0 zone resets 00:19:36.971 slat (usec): min=18, max=162704, avg=1563.49, stdev=6168.67 00:19:36.971 clat (usec): min=1330, max=445471, avg=147603.11, stdev=106815.03 00:19:36.971 lat (usec): min=1393, max=445544, avg=149166.60, stdev=108002.34 00:19:36.971 clat percentiles (msec): 00:19:36.971 | 1.00th=[ 4], 5.00th=[ 12], 10.00th=[ 24], 20.00th=[ 57], 00:19:36.971 | 30.00th=[ 75], 40.00th=[ 83], 50.00th=[ 117], 60.00th=[ 153], 00:19:36.971 | 70.00th=[ 207], 80.00th=[ 251], 90.00th=[ 305], 95.00th=[ 355], 00:19:36.971 | 99.00th=[ 422], 99.50th=[ 435], 99.90th=[ 443], 99.95th=[ 447], 00:19:36.971 | 99.99th=[ 447] 00:19:36.971 bw ( KiB/s): min=34816, max=235520, per=8.17%, avg=109929.05, stdev=55183.22, samples=20 00:19:36.971 iops : min= 136, max= 920, avg=429.35, stdev=215.54, samples=20 00:19:36.971 lat (msec) : 2=0.16%, 4=0.94%, 10=3.30%, 20=4.80%, 50=8.24% 00:19:36.971 lat (msec) : 100=25.59%, 250=36.85%, 500=20.12% 00:19:36.971 cpu : usr=1.30%, sys=1.69%, ctx=2621, majf=0, minf=1 00:19:36.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:36.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.971 issued rwts: total=0,4358,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.971 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.971 job3: (groupid=0, jobs=1): err= 0: pid=2154109: Wed Jul 10 15:43:15 2024 00:19:36.971 write: 
IOPS=320, BW=80.2MiB/s (84.1MB/s)(815MiB/10168msec); 0 zone resets 00:19:36.971 slat (usec): min=26, max=62282, avg=2753.14, stdev=5937.72 00:19:36.971 clat (msec): min=4, max=407, avg=196.71, stdev=79.40 00:19:36.971 lat (msec): min=4, max=407, avg=199.47, stdev=80.45 00:19:36.971 clat percentiles (msec): 00:19:36.971 | 1.00th=[ 24], 5.00th=[ 69], 10.00th=[ 81], 20.00th=[ 129], 00:19:36.971 | 30.00th=[ 157], 40.00th=[ 178], 50.00th=[ 207], 60.00th=[ 222], 00:19:36.971 | 70.00th=[ 243], 80.00th=[ 262], 90.00th=[ 284], 95.00th=[ 334], 00:19:36.971 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 405], 99.95th=[ 409], 00:19:36.971 | 99.99th=[ 409] 00:19:36.971 bw ( KiB/s): min=45056, max=197632, per=6.08%, avg=81831.90, stdev=37194.89, samples=20 00:19:36.971 iops : min= 176, max= 772, avg=319.60, stdev=145.33, samples=20 00:19:36.971 lat (msec) : 10=0.15%, 20=0.55%, 50=1.90%, 100=11.96%, 250=60.53% 00:19:36.971 lat (msec) : 500=24.90% 00:19:36.971 cpu : usr=0.95%, sys=1.17%, ctx=1189, majf=0, minf=1 00:19:36.971 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:19:36.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.972 issued rwts: total=0,3261,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.972 job4: (groupid=0, jobs=1): err= 0: pid=2154111: Wed Jul 10 15:43:15 2024 00:19:36.972 write: IOPS=555, BW=139MiB/s (146MB/s)(1412MiB/10164msec); 0 zone resets 00:19:36.972 slat (usec): min=21, max=228264, avg=1359.19, stdev=4921.64 00:19:36.972 clat (msec): min=2, max=473, avg=113.70, stdev=75.64 00:19:36.972 lat (msec): min=2, max=473, avg=115.06, stdev=76.45 00:19:36.972 clat percentiles (msec): 00:19:36.972 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 44], 20.00th=[ 54], 00:19:36.972 | 30.00th=[ 74], 40.00th=[ 92], 50.00th=[ 106], 60.00th=[ 111], 00:19:36.972 | 70.00th=[ 123], 80.00th=[ 150], 90.00th=[ 192], 95.00th=[ 292], 00:19:36.972 | 99.00th=[ 380], 99.50th=[ 418], 99.90th=[ 456], 99.95th=[ 472], 00:19:36.972 | 99.99th=[ 472] 00:19:36.972 bw ( KiB/s): min=49664, max=287232, per=10.62%, avg=142985.35, stdev=51788.96, samples=20 00:19:36.972 iops : min= 194, max= 1122, avg=558.45, stdev=202.27, samples=20 00:19:36.972 lat (msec) : 4=0.19%, 10=1.13%, 20=2.64%, 50=10.21%, 100=29.79% 00:19:36.972 lat (msec) : 250=49.67%, 500=6.36% 00:19:36.972 cpu : usr=1.87%, sys=2.16%, ctx=2580, majf=0, minf=1 00:19:36.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:36.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.972 issued rwts: total=0,5649,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.972 job5: (groupid=0, jobs=1): err= 0: pid=2154112: Wed Jul 10 15:43:15 2024 00:19:36.972 write: IOPS=510, BW=128MiB/s (134MB/s)(1284MiB/10068msec); 0 zone resets 00:19:36.972 slat (usec): min=22, max=187467, avg=1361.72, stdev=5173.15 00:19:36.972 clat (msec): min=2, max=446, avg=124.07, stdev=94.05 00:19:36.972 lat (msec): min=2, max=452, avg=125.43, stdev=95.22 00:19:36.972 clat percentiles (msec): 00:19:36.972 | 1.00th=[ 8], 5.00th=[ 17], 10.00th=[ 30], 20.00th=[ 51], 00:19:36.972 | 30.00th=[ 71], 40.00th=[ 81], 50.00th=[ 92], 60.00th=[ 111], 00:19:36.972 | 70.00th=[ 136], 80.00th=[ 209], 
90.00th=[ 275], 95.00th=[ 326], 00:19:36.972 | 99.00th=[ 393], 99.50th=[ 401], 99.90th=[ 422], 99.95th=[ 439], 00:19:36.972 | 99.99th=[ 447] 00:19:36.972 bw ( KiB/s): min=42922, max=220160, per=9.65%, avg=129848.30, stdev=53411.98, samples=20 00:19:36.972 iops : min= 167, max= 860, avg=507.15, stdev=208.73, samples=20 00:19:36.972 lat (msec) : 4=0.14%, 10=1.58%, 20=4.75%, 50=13.61%, 100=32.58% 00:19:36.972 lat (msec) : 250=34.20%, 500=13.15% 00:19:36.972 cpu : usr=1.60%, sys=1.96%, ctx=3027, majf=0, minf=1 00:19:36.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:19:36.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.972 issued rwts: total=0,5135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.972 job6: (groupid=0, jobs=1): err= 0: pid=2154113: Wed Jul 10 15:43:15 2024 00:19:36.972 write: IOPS=370, BW=92.6MiB/s (97.1MB/s)(942MiB/10175msec); 0 zone resets 00:19:36.972 slat (usec): min=24, max=68113, avg=2370.58, stdev=5530.89 00:19:36.972 clat (msec): min=3, max=465, avg=170.41, stdev=82.33 00:19:36.972 lat (msec): min=4, max=465, avg=172.78, stdev=83.47 00:19:36.972 clat percentiles (msec): 00:19:36.972 | 1.00th=[ 17], 5.00th=[ 58], 10.00th=[ 85], 20.00th=[ 92], 00:19:36.972 | 30.00th=[ 122], 40.00th=[ 148], 50.00th=[ 163], 60.00th=[ 180], 00:19:36.972 | 70.00th=[ 199], 80.00th=[ 228], 90.00th=[ 288], 95.00th=[ 326], 00:19:36.972 | 99.00th=[ 409], 99.50th=[ 447], 99.90th=[ 464], 99.95th=[ 464], 00:19:36.972 | 99.99th=[ 464] 00:19:36.972 bw ( KiB/s): min=38912, max=189440, per=7.04%, avg=94803.20, stdev=42776.09, samples=20 00:19:36.972 iops : min= 152, max= 740, avg=370.30, stdev=167.09, samples=20 00:19:36.972 lat (msec) : 4=0.03%, 10=0.16%, 20=1.30%, 50=2.76%, 100=20.07% 00:19:36.972 lat (msec) : 250=59.04%, 500=16.64% 00:19:36.972 cpu : usr=1.30%, sys=1.29%, ctx=1486, majf=0, minf=1 00:19:36.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:19:36.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.972 issued rwts: total=0,3767,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.972 job7: (groupid=0, jobs=1): err= 0: pid=2154114: Wed Jul 10 15:43:15 2024 00:19:36.972 write: IOPS=433, BW=108MiB/s (114MB/s)(1098MiB/10131msec); 0 zone resets 00:19:36.972 slat (usec): min=21, max=79037, avg=1835.50, stdev=4242.71 00:19:36.972 clat (msec): min=3, max=496, avg=145.78, stdev=79.63 00:19:36.972 lat (msec): min=3, max=497, avg=147.61, stdev=80.30 00:19:36.972 clat percentiles (msec): 00:19:36.972 | 1.00th=[ 12], 5.00th=[ 30], 10.00th=[ 58], 20.00th=[ 85], 00:19:36.972 | 30.00th=[ 93], 40.00th=[ 114], 50.00th=[ 132], 60.00th=[ 155], 00:19:36.972 | 70.00th=[ 180], 80.00th=[ 201], 90.00th=[ 251], 95.00th=[ 313], 00:19:36.972 | 99.00th=[ 368], 99.50th=[ 401], 99.90th=[ 489], 99.95th=[ 489], 00:19:36.972 | 99.99th=[ 498] 00:19:36.972 bw ( KiB/s): min=61050, max=185856, per=8.23%, avg=110752.55, stdev=39912.50, samples=20 00:19:36.972 iops : min= 238, max= 726, avg=432.55, stdev=155.90, samples=20 00:19:36.972 lat (msec) : 4=0.05%, 10=0.62%, 20=2.30%, 50=5.54%, 100=24.01% 00:19:36.972 lat (msec) : 250=57.47%, 500=10.02% 00:19:36.972 cpu : usr=1.45%, sys=1.63%, ctx=1796, 
majf=0, minf=1 00:19:36.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:36.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.972 issued rwts: total=0,4390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.972 job8: (groupid=0, jobs=1): err= 0: pid=2154115: Wed Jul 10 15:43:15 2024 00:19:36.972 write: IOPS=433, BW=108MiB/s (114MB/s)(1096MiB/10112msec); 0 zone resets 00:19:36.972 slat (usec): min=23, max=117825, avg=1968.42, stdev=5396.95 00:19:36.972 clat (msec): min=2, max=458, avg=145.58, stdev=86.28 00:19:36.972 lat (msec): min=2, max=458, avg=147.55, stdev=87.43 00:19:36.972 clat percentiles (msec): 00:19:36.972 | 1.00th=[ 10], 5.00th=[ 39], 10.00th=[ 61], 20.00th=[ 80], 00:19:36.972 | 30.00th=[ 86], 40.00th=[ 102], 50.00th=[ 123], 60.00th=[ 148], 00:19:36.972 | 70.00th=[ 184], 80.00th=[ 222], 90.00th=[ 259], 95.00th=[ 300], 00:19:36.972 | 99.00th=[ 426], 99.50th=[ 443], 99.90th=[ 456], 99.95th=[ 460], 00:19:36.972 | 99.99th=[ 460] 00:19:36.972 bw ( KiB/s): min=40960, max=246272, per=8.22%, avg=110599.75, stdev=58965.69, samples=20 00:19:36.972 iops : min= 160, max= 962, avg=432.00, stdev=230.35, samples=20 00:19:36.972 lat (msec) : 4=0.05%, 10=1.03%, 20=1.62%, 50=3.76%, 100=33.30% 00:19:36.972 lat (msec) : 250=47.97%, 500=12.27% 00:19:36.972 cpu : usr=1.43%, sys=1.40%, ctx=1818, majf=0, minf=1 00:19:36.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:19:36.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.972 issued rwts: total=0,4384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.972 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.972 job9: (groupid=0, jobs=1): err= 0: pid=2154116: Wed Jul 10 15:43:15 2024 00:19:36.972 write: IOPS=465, BW=116MiB/s (122MB/s)(1182MiB/10154msec); 0 zone resets 00:19:36.972 slat (usec): min=16, max=144013, avg=1375.40, stdev=6899.95 00:19:36.972 clat (usec): min=1663, max=498855, avg=135998.16, stdev=122467.88 00:19:36.972 lat (usec): min=1788, max=498903, avg=137373.57, stdev=124020.43 00:19:36.972 clat percentiles (msec): 00:19:36.972 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 15], 20.00th=[ 29], 00:19:36.972 | 30.00th=[ 40], 40.00th=[ 50], 50.00th=[ 79], 60.00th=[ 142], 00:19:36.973 | 70.00th=[ 218], 80.00th=[ 266], 90.00th=[ 317], 95.00th=[ 363], 00:19:36.973 | 99.00th=[ 435], 99.50th=[ 451], 99.90th=[ 493], 99.95th=[ 498], 00:19:36.973 | 99.99th=[ 498] 00:19:36.973 bw ( KiB/s): min=38912, max=272896, per=8.87%, avg=119364.40, stdev=79703.72, samples=20 00:19:36.973 iops : min= 152, max= 1066, avg=466.20, stdev=311.27, samples=20 00:19:36.973 lat (msec) : 2=0.04%, 4=0.59%, 10=5.71%, 20=8.74%, 50=25.66% 00:19:36.973 lat (msec) : 100=14.22%, 250=22.13%, 500=22.91% 00:19:36.973 cpu : usr=1.76%, sys=1.78%, ctx=3361, majf=0, minf=1 00:19:36.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:36.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.973 issued rwts: total=0,4727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.973 job10: (groupid=0, jobs=1): err= 0: 
pid=2154117: Wed Jul 10 15:43:15 2024 00:19:36.973 write: IOPS=400, BW=100MiB/s (105MB/s)(1008MiB/10068msec); 0 zone resets 00:19:36.973 slat (usec): min=21, max=97748, avg=2166.89, stdev=5580.15 00:19:36.973 clat (msec): min=4, max=421, avg=157.52, stdev=92.38 00:19:36.973 lat (msec): min=4, max=421, avg=159.68, stdev=93.67 00:19:36.973 clat percentiles (msec): 00:19:36.973 | 1.00th=[ 14], 5.00th=[ 35], 10.00th=[ 50], 20.00th=[ 73], 00:19:36.973 | 30.00th=[ 89], 40.00th=[ 120], 50.00th=[ 136], 60.00th=[ 165], 00:19:36.973 | 70.00th=[ 213], 80.00th=[ 249], 90.00th=[ 284], 95.00th=[ 317], 00:19:36.973 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 422], 99.95th=[ 422], 00:19:36.973 | 99.99th=[ 422] 00:19:36.973 bw ( KiB/s): min=42922, max=213504, per=7.55%, avg=101621.15, stdev=51606.32, samples=20 00:19:36.973 iops : min= 167, max= 834, avg=396.90, stdev=201.65, samples=20 00:19:36.973 lat (msec) : 10=0.37%, 20=1.64%, 50=8.08%, 100=24.70%, 250=45.43% 00:19:36.973 lat (msec) : 500=19.79% 00:19:36.973 cpu : usr=1.37%, sys=1.23%, ctx=1635, majf=0, minf=1 00:19:36.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:36.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:36.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:36.973 issued rwts: total=0,4033,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:36.973 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:36.973 00:19:36.973 Run status group 0 (all jobs): 00:19:36.973 WRITE: bw=1314MiB/s (1378MB/s), 80.2MiB/s-203MiB/s (84.1MB/s-213MB/s), io=13.1GiB (14.0GB), run=10068-10175msec 00:19:36.973 00:19:36.973 Disk stats (read/write): 00:19:36.973 nvme0n1: ios=49/10818, merge=0/0, ticks=282/1215762, in_queue=1216044, util=99.41% 00:19:36.973 nvme10n1: ios=46/16145, merge=0/0, ticks=2794/1205063, in_queue=1207857, util=100.00% 00:19:36.973 nvme1n1: ios=0/8693, merge=0/0, ticks=0/1245823, in_queue=1245823, util=97.34% 00:19:36.973 nvme2n1: ios=15/6487, merge=0/0, ticks=430/1234194, in_queue=1234624, util=98.06% 00:19:36.973 nvme3n1: ios=46/11281, merge=0/0, ticks=1831/1236905, in_queue=1238736, util=100.00% 00:19:36.973 nvme4n1: ios=0/9977, merge=0/0, ticks=0/1219113, in_queue=1219113, util=97.95% 00:19:36.973 nvme5n1: ios=0/7501, merge=0/0, ticks=0/1234179, in_queue=1234179, util=98.20% 00:19:36.973 nvme6n1: ios=0/8571, merge=0/0, ticks=0/1206101, in_queue=1206101, util=98.28% 00:19:36.973 nvme7n1: ios=0/8506, merge=0/0, ticks=0/1211673, in_queue=1211673, util=98.72% 00:19:36.973 nvme8n1: ios=41/9448, merge=0/0, ticks=4025/1234601, in_queue=1238626, util=100.00% 00:19:36.973 nvme9n1: ios=46/7775, merge=0/0, ticks=1106/1210595, in_queue=1211701, util=100.00% 00:19:36.973 15:43:15 -- target/multiconnection.sh@36 -- # sync 00:19:36.973 15:43:15 -- target/multiconnection.sh@37 -- # seq 1 11 00:19:36.973 15:43:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.973 15:43:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:36.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:36.973 15:43:15 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:36.973 15:43:15 -- common/autotest_common.sh@1198 -- # local i=0 00:19:36.973 15:43:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:36.973 15:43:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:19:36.973 15:43:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 
00:19:36.973 15:43:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:19:36.973 15:43:15 -- common/autotest_common.sh@1210 -- # return 0 00:19:36.973 15:43:15 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:36.973 15:43:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.973 15:43:15 -- common/autotest_common.sh@10 -- # set +x 00:19:36.973 15:43:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.973 15:43:15 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.973 15:43:15 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:36.973 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:36.973 15:43:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:36.973 15:43:16 -- common/autotest_common.sh@1198 -- # local i=0 00:19:36.973 15:43:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:36.973 15:43:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:19:36.973 15:43:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:36.973 15:43:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:19:36.973 15:43:16 -- common/autotest_common.sh@1210 -- # return 0 00:19:36.973 15:43:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:36.973 15:43:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:36.973 15:43:16 -- common/autotest_common.sh@10 -- # set +x 00:19:36.973 15:43:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:36.973 15:43:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:36.973 15:43:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:37.538 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:37.538 15:43:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:37.538 15:43:16 -- common/autotest_common.sh@1198 -- # local i=0 00:19:37.538 15:43:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:37.538 15:43:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:19:37.538 15:43:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:37.538 15:43:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:19:37.538 15:43:16 -- common/autotest_common.sh@1210 -- # return 0 00:19:37.538 15:43:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:37.538 15:43:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.538 15:43:16 -- common/autotest_common.sh@10 -- # set +x 00:19:37.538 15:43:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.538 15:43:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:37.538 15:43:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:37.795 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:37.795 15:43:16 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:37.795 15:43:16 -- common/autotest_common.sh@1198 -- # local i=0 00:19:37.795 15:43:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:37.795 15:43:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:19:37.795 15:43:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:37.795 15:43:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:19:37.795 
15:43:16 -- common/autotest_common.sh@1210 -- # return 0 00:19:37.795 15:43:16 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:37.795 15:43:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.795 15:43:16 -- common/autotest_common.sh@10 -- # set +x 00:19:37.795 15:43:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.795 15:43:16 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:37.795 15:43:16 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:38.053 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:38.053 15:43:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:38.053 15:43:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.053 15:43:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.053 15:43:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:19:38.053 15:43:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.053 15:43:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:19:38.053 15:43:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.053 15:43:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:38.053 15:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.053 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:19:38.053 15:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.053 15:43:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.053 15:43:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:38.053 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:38.053 15:43:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:38.053 15:43:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.053 15:43:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.053 15:43:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:19:38.053 15:43:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.053 15:43:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:19:38.311 15:43:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.311 15:43:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:38.311 15:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.311 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:19:38.311 15:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.311 15:43:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.311 15:43:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:38.311 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:38.311 15:43:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:38.311 15:43:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.311 15:43:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.311 15:43:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:19:38.311 15:43:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.311 15:43:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:19:38.311 15:43:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.311 15:43:17 -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:38.311 15:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.311 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:19:38.311 15:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.311 15:43:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.312 15:43:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:38.312 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:38.312 15:43:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:38.312 15:43:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.312 15:43:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.312 15:43:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:19:38.312 15:43:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.312 15:43:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:19:38.312 15:43:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.312 15:43:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:38.312 15:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.312 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:19:38.312 15:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.312 15:43:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.312 15:43:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:38.570 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:38.570 15:43:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:38.570 15:43:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.570 15:43:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.570 15:43:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:19:38.570 15:43:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.570 15:43:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:19:38.570 15:43:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.570 15:43:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:38.570 15:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.570 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:19:38.570 15:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.570 15:43:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.570 15:43:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:38.570 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:38.570 15:43:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:38.570 15:43:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.570 15:43:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.570 15:43:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:19:38.570 15:43:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.570 15:43:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:19:38.570 15:43:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.570 15:43:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 
00:19:38.570 15:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.570 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:19:38.570 15:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.570 15:43:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:38.570 15:43:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:38.570 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:38.570 15:43:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:38.570 15:43:17 -- common/autotest_common.sh@1198 -- # local i=0 00:19:38.570 15:43:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:38.570 15:43:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:19:38.570 15:43:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:38.570 15:43:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:19:38.570 15:43:17 -- common/autotest_common.sh@1210 -- # return 0 00:19:38.570 15:43:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:38.570 15:43:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.570 15:43:17 -- common/autotest_common.sh@10 -- # set +x 00:19:38.570 15:43:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.570 15:43:17 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:38.570 15:43:17 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:38.570 15:43:17 -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:38.570 15:43:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.570 15:43:17 -- nvmf/common.sh@116 -- # sync 00:19:38.570 15:43:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.570 15:43:17 -- nvmf/common.sh@119 -- # set +e 00:19:38.570 15:43:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.570 15:43:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.570 rmmod nvme_tcp 00:19:38.828 rmmod nvme_fabrics 00:19:38.828 rmmod nvme_keyring 00:19:38.828 15:43:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.828 15:43:18 -- nvmf/common.sh@123 -- # set -e 00:19:38.828 15:43:18 -- nvmf/common.sh@124 -- # return 0 00:19:38.828 15:43:18 -- nvmf/common.sh@477 -- # '[' -n 2148516 ']' 00:19:38.828 15:43:18 -- nvmf/common.sh@478 -- # killprocess 2148516 00:19:38.828 15:43:18 -- common/autotest_common.sh@926 -- # '[' -z 2148516 ']' 00:19:38.828 15:43:18 -- common/autotest_common.sh@930 -- # kill -0 2148516 00:19:38.828 15:43:18 -- common/autotest_common.sh@931 -- # uname 00:19:38.828 15:43:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:38.828 15:43:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2148516 00:19:38.828 15:43:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:38.828 15:43:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:38.828 15:43:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2148516' 00:19:38.828 killing process with pid 2148516 00:19:38.828 15:43:18 -- common/autotest_common.sh@945 -- # kill 2148516 00:19:38.828 15:43:18 -- common/autotest_common.sh@950 -- # wait 2148516 00:19:39.395 15:43:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:39.395 15:43:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:39.395 15:43:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:39.395 15:43:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:19:39.395 15:43:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:39.395 15:43:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:39.395 15:43:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:39.395 15:43:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.297 15:43:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:41.297 00:19:41.297 real 1m1.336s 00:19:41.297 user 3m23.037s 00:19:41.297 sys 0m25.449s 00:19:41.297 15:43:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.297 15:43:20 -- common/autotest_common.sh@10 -- # set +x 00:19:41.297 ************************************ 00:19:41.297 END TEST nvmf_multiconnection 00:19:41.297 ************************************ 00:19:41.297 15:43:20 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:41.297 15:43:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:41.297 15:43:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.297 15:43:20 -- common/autotest_common.sh@10 -- # set +x 00:19:41.297 ************************************ 00:19:41.297 START TEST nvmf_initiator_timeout 00:19:41.297 ************************************ 00:19:41.297 15:43:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:41.556 * Looking for test storage... 00:19:41.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.556 15:43:20 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.556 15:43:20 -- nvmf/common.sh@7 -- # uname -s 00:19:41.556 15:43:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.556 15:43:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.556 15:43:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.556 15:43:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.556 15:43:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.556 15:43:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.556 15:43:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.556 15:43:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.556 15:43:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.556 15:43:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.556 15:43:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.556 15:43:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.556 15:43:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.556 15:43:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.556 15:43:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.556 15:43:20 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.556 15:43:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.556 15:43:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.556 15:43:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.556 15:43:20 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.556 15:43:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.556 15:43:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.556 15:43:20 -- paths/export.sh@5 -- # export PATH 00:19:41.556 15:43:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.556 15:43:20 -- nvmf/common.sh@46 -- # : 0 00:19:41.556 15:43:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.556 15:43:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.556 15:43:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.556 15:43:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.556 15:43:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.556 15:43:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.556 15:43:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.556 15:43:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.556 15:43:20 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.556 15:43:20 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.556 15:43:20 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:41.556 15:43:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:41.556 15:43:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.557 15:43:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.557 15:43:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.557 15:43:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.557 15:43:20 -- nvmf/common.sh@616 
-- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.557 15:43:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.557 15:43:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.557 15:43:20 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:41.557 15:43:20 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:41.557 15:43:20 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:41.557 15:43:20 -- common/autotest_common.sh@10 -- # set +x 00:19:43.512 15:43:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.512 15:43:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:43.512 15:43:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:43.512 15:43:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:43.512 15:43:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:43.512 15:43:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:43.512 15:43:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:43.512 15:43:22 -- nvmf/common.sh@294 -- # net_devs=() 00:19:43.512 15:43:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:43.512 15:43:22 -- nvmf/common.sh@295 -- # e810=() 00:19:43.512 15:43:22 -- nvmf/common.sh@295 -- # local -ga e810 00:19:43.512 15:43:22 -- nvmf/common.sh@296 -- # x722=() 00:19:43.512 15:43:22 -- nvmf/common.sh@296 -- # local -ga x722 00:19:43.512 15:43:22 -- nvmf/common.sh@297 -- # mlx=() 00:19:43.512 15:43:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:43.512 15:43:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.512 15:43:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:43.512 15:43:22 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:43.512 15:43:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:43.512 15:43:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.512 15:43:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:43.512 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:43.512 15:43:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@339 -- # for pci in 
"${pci_devs[@]}" 00:19:43.512 15:43:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:43.512 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:43.512 15:43:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:43.512 15:43:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.512 15:43:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.512 15:43:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.512 15:43:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.512 15:43:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:43.512 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:43.512 15:43:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.512 15:43:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.512 15:43:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.512 15:43:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.512 15:43:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.512 15:43:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:43.512 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:43.512 15:43:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.512 15:43:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:43.512 15:43:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:43.512 15:43:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:43.512 15:43:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:43.512 15:43:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.512 15:43:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.512 15:43:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.512 15:43:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:43.512 15:43:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.512 15:43:22 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.512 15:43:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:43.512 15:43:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.512 15:43:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.512 15:43:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:43.512 15:43:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:43.513 15:43:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.513 15:43:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.513 15:43:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.513 15:43:22 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.513 15:43:22 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:43.513 15:43:22 -- nvmf/common.sh@259 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.771 15:43:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.771 15:43:22 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.771 15:43:22 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:43.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:19:43.771 00:19:43.771 --- 10.0.0.2 ping statistics --- 00:19:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.771 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:19:43.771 15:43:22 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:43.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:19:43.771 00:19:43.771 --- 10.0.0.1 ping statistics --- 00:19:43.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.771 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:19:43.772 15:43:22 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.772 15:43:22 -- nvmf/common.sh@410 -- # return 0 00:19:43.772 15:43:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:43.772 15:43:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.772 15:43:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:43.772 15:43:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:43.772 15:43:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.772 15:43:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:43.772 15:43:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:43.772 15:43:22 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:43.772 15:43:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.772 15:43:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.772 15:43:22 -- common/autotest_common.sh@10 -- # set +x 00:19:43.772 15:43:22 -- nvmf/common.sh@469 -- # nvmfpid=2157531 00:19:43.772 15:43:22 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:43.772 15:43:22 -- nvmf/common.sh@470 -- # waitforlisten 2157531 00:19:43.772 15:43:22 -- common/autotest_common.sh@819 -- # '[' -z 2157531 ']' 00:19:43.772 15:43:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.772 15:43:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.772 15:43:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.772 15:43:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.772 15:43:22 -- common/autotest_common.sh@10 -- # set +x 00:19:43.772 [2024-07-10 15:43:22.989534] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
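For readers following the trace: the nvmf_tcp_init sequence above builds a two-port loopback topology on the E810 NIC. Port cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), while cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), and nvmf_tgt is then launched inside that namespace. A condensed sketch of the setup, with the interface names and addresses exactly as they appear in the trace (a recap, not the full helper):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator check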
00:19:43.772 [2024-07-10 15:43:22.989612] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.772 EAL: No free 2048 kB hugepages reported on node 1 00:19:43.772 [2024-07-10 15:43:23.057540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:44.029 [2024-07-10 15:43:23.166456] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:44.030 [2024-07-10 15:43:23.166627] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.030 [2024-07-10 15:43:23.166645] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:44.030 [2024-07-10 15:43:23.166658] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.030 [2024-07-10 15:43:23.166732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.030 [2024-07-10 15:43:23.166801] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.030 [2024-07-10 15:43:23.166849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.030 [2024-07-10 15:43:23.166852] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.963 15:43:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.963 15:43:23 -- common/autotest_common.sh@852 -- # return 0 00:19:44.963 15:43:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:44.963 15:43:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:44.963 15:43:23 -- common/autotest_common.sh@10 -- # set +x 00:19:44.963 15:43:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.963 15:43:24 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:44.963 15:43:24 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:44.963 15:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.963 15:43:24 -- common/autotest_common.sh@10 -- # set +x 00:19:44.963 Malloc0 00:19:44.963 15:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.963 15:43:24 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:44.963 15:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.963 15:43:24 -- common/autotest_common.sh@10 -- # set +x 00:19:44.963 Delay0 00:19:44.963 15:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.963 15:43:24 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.963 15:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.963 15:43:24 -- common/autotest_common.sh@10 -- # set +x 00:19:44.963 [2024-07-10 15:43:24.041269] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.963 15:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.963 15:43:24 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:44.963 15:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.963 15:43:24 -- common/autotest_common.sh@10 -- # set +x 00:19:44.963 15:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
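The rpc_cmd calls above and below are the harness's wrapper around scripts/rpc.py, talking to the nvmf_tgt instance just started in the namespace. Issued by hand, the initiator_timeout setup recorded here would look roughly like the following sketch (arguments copied from the trace; the namespace attach, the listener and the host-side connect appear in the next lines of the log):

  rpc.py bdev_malloc_create 64 512 -b Malloc0                             # 64 MB malloc bdev, 512-byte blocks
  rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30   # delay bdev layered on Malloc0
  rpc.py nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, 8192-byte I/O unit
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # initiator side; the trace also passes --hostnqn/--hostid

The trace below then raises and later restores the Delay0 latencies via bdev_delay_update_latency while fio runs against the attached device.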
00:19:44.963 15:43:24 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:44.963 15:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.963 15:43:24 -- common/autotest_common.sh@10 -- # set +x 00:19:44.963 15:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.963 15:43:24 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.963 15:43:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.963 15:43:24 -- common/autotest_common.sh@10 -- # set +x 00:19:44.963 [2024-07-10 15:43:24.069552] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.963 15:43:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.963 15:43:24 -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:45.529 15:43:24 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:45.529 15:43:24 -- common/autotest_common.sh@1177 -- # local i=0 00:19:45.529 15:43:24 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:45.529 15:43:24 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:19:45.529 15:43:24 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:47.426 15:43:26 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:47.426 15:43:26 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:47.426 15:43:26 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:47.426 15:43:26 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:19:47.426 15:43:26 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:47.426 15:43:26 -- common/autotest_common.sh@1187 -- # return 0 00:19:47.426 15:43:26 -- target/initiator_timeout.sh@35 -- # fio_pid=2158087 00:19:47.426 15:43:26 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:47.426 15:43:26 -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:47.426 [global] 00:19:47.426 thread=1 00:19:47.426 invalidate=1 00:19:47.426 rw=write 00:19:47.426 time_based=1 00:19:47.426 runtime=60 00:19:47.426 ioengine=libaio 00:19:47.426 direct=1 00:19:47.426 bs=4096 00:19:47.426 iodepth=1 00:19:47.426 norandommap=0 00:19:47.426 numjobs=1 00:19:47.426 00:19:47.426 verify_dump=1 00:19:47.426 verify_backlog=512 00:19:47.426 verify_state_save=0 00:19:47.426 do_verify=1 00:19:47.426 verify=crc32c-intel 00:19:47.426 [job0] 00:19:47.426 filename=/dev/nvme0n1 00:19:47.426 Could not set queue depth (nvme0n1) 00:19:47.685 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:47.685 fio-3.35 00:19:47.685 Starting 1 thread 00:19:50.965 15:43:29 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:50.965 15:43:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.965 15:43:29 -- common/autotest_common.sh@10 -- # set +x 00:19:50.965 true 00:19:50.965 15:43:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.965 15:43:29 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:50.965 15:43:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.965 15:43:29 -- common/autotest_common.sh@10 -- # set +x 00:19:50.965 true 00:19:50.965 15:43:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.965 15:43:29 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:50.965 15:43:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.965 15:43:29 -- common/autotest_common.sh@10 -- # set +x 00:19:50.965 true 00:19:50.965 15:43:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.965 15:43:29 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:50.965 15:43:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:50.965 15:43:29 -- common/autotest_common.sh@10 -- # set +x 00:19:50.965 true 00:19:50.965 15:43:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:50.965 15:43:29 -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:53.491 15:43:32 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:53.491 15:43:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.491 15:43:32 -- common/autotest_common.sh@10 -- # set +x 00:19:53.491 true 00:19:53.491 15:43:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.492 15:43:32 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:53.492 15:43:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.492 15:43:32 -- common/autotest_common.sh@10 -- # set +x 00:19:53.492 true 00:19:53.492 15:43:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.492 15:43:32 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:53.492 15:43:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.492 15:43:32 -- common/autotest_common.sh@10 -- # set +x 00:19:53.492 true 00:19:53.492 15:43:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.492 15:43:32 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:53.492 15:43:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:53.492 15:43:32 -- common/autotest_common.sh@10 -- # set +x 00:19:53.492 true 00:19:53.492 15:43:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:53.492 15:43:32 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:53.492 15:43:32 -- target/initiator_timeout.sh@54 -- # wait 2158087 00:20:49.710 00:20:49.710 job0: (groupid=0, jobs=1): err= 0: pid=2158157: Wed Jul 10 15:44:27 2024 00:20:49.710 read: IOPS=7, BW=30.8KiB/s (31.5kB/s)(1848KiB/60039msec) 00:20:49.710 slat (usec): min=7, max=7860, avg=40.62, stdev=364.72 00:20:49.710 clat (usec): min=437, max=41136k, avg=129315.17, stdev=1911936.11 00:20:49.710 lat (usec): min=453, max=41136k, avg=129355.79, stdev=1911935.00 00:20:49.710 clat percentiles (usec): 00:20:49.710 | 1.00th=[ 510], 5.00th=[ 41157], 10.00th=[ 41157], 00:20:49.710 | 20.00th=[ 41157], 30.00th=[ 41157], 40.00th=[ 41157], 00:20:49.710 | 50.00th=[ 41157], 60.00th=[ 41157], 70.00th=[ 41157], 00:20:49.710 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:20:49.710 | 99.00th=[ 41681], 99.50th=[ 41681], 99.90th=[17112761], 00:20:49.710 | 99.95th=[17112761], 99.99th=[17112761] 00:20:49.710 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60039msec); 0 zone resets 00:20:49.710 slat (usec): min=11, max=29071, avg=94.00, stdev=1283.17 00:20:49.710 clat (usec): min=309, max=524, 
avg=429.53, stdev=43.39 00:20:49.710 lat (usec): min=334, max=29500, avg=523.53, stdev=1284.02 00:20:49.710 clat percentiles (usec): 00:20:49.710 | 1.00th=[ 338], 5.00th=[ 355], 10.00th=[ 367], 20.00th=[ 388], 00:20:49.710 | 30.00th=[ 416], 40.00th=[ 429], 50.00th=[ 433], 60.00th=[ 445], 00:20:49.710 | 70.00th=[ 453], 80.00th=[ 465], 90.00th=[ 486], 95.00th=[ 498], 00:20:49.710 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 529], 99.95th=[ 529], 00:20:49.710 | 99.99th=[ 529] 00:20:49.710 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:20:49.710 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:20:49.710 lat (usec) : 500=50.41%, 750=2.87% 00:20:49.710 lat (msec) : 50=46.61%, >=2000=0.10% 00:20:49.710 cpu : usr=0.04%, sys=0.06%, ctx=977, majf=0, minf=2 00:20:49.710 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:49.710 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.710 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:49.710 issued rwts: total=462,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:49.710 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:49.710 00:20:49.710 Run status group 0 (all jobs): 00:20:49.710 READ: bw=30.8KiB/s (31.5kB/s), 30.8KiB/s-30.8KiB/s (31.5kB/s-31.5kB/s), io=1848KiB (1892kB), run=60039-60039msec 00:20:49.710 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60039-60039msec 00:20:49.710 00:20:49.710 Disk stats (read/write): 00:20:49.710 nvme0n1: ios=510/512, merge=0/0, ticks=19755/170, in_queue=19925, util=99.67% 00:20:49.710 15:44:27 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:49.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:49.710 15:44:27 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:49.710 15:44:27 -- common/autotest_common.sh@1198 -- # local i=0 00:20:49.710 15:44:27 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:20:49.710 15:44:27 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:49.710 15:44:27 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:20:49.711 15:44:27 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:49.711 15:44:27 -- common/autotest_common.sh@1210 -- # return 0 00:20:49.711 15:44:27 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:49.711 15:44:27 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:49.711 nvmf hotplug test: fio successful as expected 00:20:49.711 15:44:27 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:49.711 15:44:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:49.711 15:44:27 -- common/autotest_common.sh@10 -- # set +x 00:20:49.711 15:44:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:49.711 15:44:27 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:20:49.711 15:44:27 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:49.711 15:44:27 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:49.711 15:44:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:49.711 15:44:27 -- nvmf/common.sh@116 -- # sync 00:20:49.711 15:44:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:49.711 15:44:27 -- nvmf/common.sh@119 -- # set +e 00:20:49.711 15:44:27 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:20:49.711 15:44:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:49.711 rmmod nvme_tcp 00:20:49.711 rmmod nvme_fabrics 00:20:49.711 rmmod nvme_keyring 00:20:49.711 15:44:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:49.711 15:44:27 -- nvmf/common.sh@123 -- # set -e 00:20:49.711 15:44:27 -- nvmf/common.sh@124 -- # return 0 00:20:49.711 15:44:27 -- nvmf/common.sh@477 -- # '[' -n 2157531 ']' 00:20:49.711 15:44:27 -- nvmf/common.sh@478 -- # killprocess 2157531 00:20:49.711 15:44:27 -- common/autotest_common.sh@926 -- # '[' -z 2157531 ']' 00:20:49.711 15:44:27 -- common/autotest_common.sh@930 -- # kill -0 2157531 00:20:49.711 15:44:27 -- common/autotest_common.sh@931 -- # uname 00:20:49.711 15:44:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:49.711 15:44:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2157531 00:20:49.711 15:44:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:49.711 15:44:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:49.711 15:44:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2157531' 00:20:49.711 killing process with pid 2157531 00:20:49.711 15:44:27 -- common/autotest_common.sh@945 -- # kill 2157531 00:20:49.711 15:44:27 -- common/autotest_common.sh@950 -- # wait 2157531 00:20:49.711 15:44:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:49.711 15:44:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:49.711 15:44:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:49.711 15:44:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.711 15:44:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:49.711 15:44:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.711 15:44:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:49.711 15:44:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.278 15:44:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:20:50.278 00:20:50.278 real 1m8.947s 00:20:50.278 user 4m14.164s 00:20:50.278 sys 0m6.157s 00:20:50.278 15:44:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:50.278 15:44:29 -- common/autotest_common.sh@10 -- # set +x 00:20:50.278 ************************************ 00:20:50.278 END TEST nvmf_initiator_timeout 00:20:50.278 ************************************ 00:20:50.278 15:44:29 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:20:50.278 15:44:29 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:20:50.278 15:44:29 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:20:50.278 15:44:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:50.278 15:44:29 -- common/autotest_common.sh@10 -- # set +x 00:20:52.202 15:44:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:52.202 15:44:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:52.202 15:44:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:52.202 15:44:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:52.202 15:44:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:52.202 15:44:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:52.202 15:44:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:52.202 15:44:31 -- nvmf/common.sh@294 -- # net_devs=() 00:20:52.202 15:44:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:52.202 15:44:31 -- nvmf/common.sh@295 -- # e810=() 00:20:52.202 15:44:31 -- nvmf/common.sh@295 -- # local -ga e810 00:20:52.202 
15:44:31 -- nvmf/common.sh@296 -- # x722=() 00:20:52.202 15:44:31 -- nvmf/common.sh@296 -- # local -ga x722 00:20:52.202 15:44:31 -- nvmf/common.sh@297 -- # mlx=() 00:20:52.202 15:44:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:52.202 15:44:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:52.202 15:44:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:52.202 15:44:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:52.202 15:44:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:52.202 15:44:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.202 15:44:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:52.202 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:52.202 15:44:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:52.202 15:44:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:52.202 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:52.202 15:44:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:52.202 15:44:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:52.202 15:44:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.202 15:44:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.202 15:44:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.202 15:44:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.202 15:44:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:52.202 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:52.202 15:44:31 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:52.202 15:44:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:52.202 15:44:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:52.202 15:44:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:52.202 15:44:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:52.202 15:44:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:52.202 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:52.202 15:44:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:52.202 15:44:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:52.202 15:44:31 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:52.202 15:44:31 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:20:52.202 15:44:31 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:52.202 15:44:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:52.202 15:44:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:52.202 15:44:31 -- common/autotest_common.sh@10 -- # set +x 00:20:52.202 ************************************ 00:20:52.202 START TEST nvmf_perf_adq 00:20:52.202 ************************************ 00:20:52.202 15:44:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:52.202 * Looking for test storage... 00:20:52.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:52.461 15:44:31 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:52.461 15:44:31 -- nvmf/common.sh@7 -- # uname -s 00:20:52.461 15:44:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:52.461 15:44:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:52.461 15:44:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:52.461 15:44:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:52.461 15:44:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:52.461 15:44:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:52.461 15:44:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:52.461 15:44:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:52.461 15:44:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:52.461 15:44:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:52.461 15:44:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.461 15:44:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.461 15:44:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:52.461 15:44:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:52.461 15:44:31 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:52.461 15:44:31 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:52.461 15:44:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:52.461 15:44:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:52.461 15:44:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:52.461 15:44:31 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.461 15:44:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.461 15:44:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.461 15:44:31 -- paths/export.sh@5 -- # export PATH 00:20:52.462 15:44:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:52.462 15:44:31 -- nvmf/common.sh@46 -- # : 0 00:20:52.462 15:44:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:52.462 15:44:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:52.462 15:44:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:52.462 15:44:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:52.462 15:44:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:52.462 15:44:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:52.462 15:44:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:52.462 15:44:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:52.462 15:44:31 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:52.462 15:44:31 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:52.462 15:44:31 -- common/autotest_common.sh@10 -- # set +x 00:20:54.364 15:44:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:54.364 15:44:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:54.364 15:44:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:54.364 15:44:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:54.364 15:44:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:54.364 15:44:33 -- nvmf/common.sh@292 -- # pci_drivers=() 
00:20:54.364 15:44:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:54.364 15:44:33 -- nvmf/common.sh@294 -- # net_devs=() 00:20:54.364 15:44:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:54.364 15:44:33 -- nvmf/common.sh@295 -- # e810=() 00:20:54.364 15:44:33 -- nvmf/common.sh@295 -- # local -ga e810 00:20:54.364 15:44:33 -- nvmf/common.sh@296 -- # x722=() 00:20:54.364 15:44:33 -- nvmf/common.sh@296 -- # local -ga x722 00:20:54.364 15:44:33 -- nvmf/common.sh@297 -- # mlx=() 00:20:54.364 15:44:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:54.364 15:44:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:54.364 15:44:33 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:54.364 15:44:33 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:20:54.364 15:44:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:54.364 15:44:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:54.364 15:44:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:54.364 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:54.364 15:44:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:54.364 15:44:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:54.364 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:54.364 15:44:33 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:54.364 15:44:33 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:20:54.364 15:44:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:54.364 15:44:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.364 15:44:33 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:54.364 15:44:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.364 15:44:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:54.364 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:54.364 15:44:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.364 15:44:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:54.364 15:44:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:54.364 15:44:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:54.364 15:44:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:54.364 15:44:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:54.364 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:54.364 15:44:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:54.364 15:44:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:54.364 15:44:33 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:54.364 15:44:33 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:54.364 15:44:33 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:54.364 15:44:33 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:20:54.364 15:44:33 -- target/perf_adq.sh@52 -- # rmmod ice 00:20:54.931 15:44:34 -- target/perf_adq.sh@53 -- # modprobe ice 00:20:56.833 15:44:36 -- target/perf_adq.sh@54 -- # sleep 5 00:21:02.100 15:44:41 -- target/perf_adq.sh@67 -- # nvmftestinit 00:21:02.100 15:44:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:02.100 15:44:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.100 15:44:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:02.100 15:44:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:02.100 15:44:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:02.100 15:44:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.100 15:44:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.100 15:44:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.100 15:44:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:02.100 15:44:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:02.100 15:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:02.100 15:44:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:02.100 15:44:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:02.100 15:44:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:02.100 15:44:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:02.100 15:44:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:02.100 15:44:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:02.100 15:44:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:02.100 15:44:41 -- nvmf/common.sh@294 -- # net_devs=() 00:21:02.100 15:44:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:02.100 15:44:41 -- nvmf/common.sh@295 -- # e810=() 00:21:02.100 15:44:41 -- nvmf/common.sh@295 -- # local -ga e810 00:21:02.100 15:44:41 -- nvmf/common.sh@296 -- # x722=() 00:21:02.100 15:44:41 -- nvmf/common.sh@296 -- # local -ga x722 00:21:02.100 15:44:41 -- nvmf/common.sh@297 -- # mlx=() 00:21:02.100 15:44:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:02.100 15:44:41 -- nvmf/common.sh@300 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.100 15:44:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:02.100 15:44:41 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:02.100 15:44:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:02.100 15:44:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:02.100 15:44:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:02.100 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:02.100 15:44:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:02.100 15:44:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:02.100 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:02.100 15:44:41 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:02.100 15:44:41 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:02.100 15:44:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.100 15:44:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:02.100 15:44:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.100 15:44:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:02.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:02.100 15:44:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.100 15:44:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:02.100 15:44:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.100 15:44:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
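The scan above, which continues on the following lines, is gather_supported_nvmf_pci_devs matching the two E810 ports by PCI vendor/device ID and then looking up their netdev names under sysfs. A rough manual equivalent, using only the ID and paths that appear in the trace (the helper also knows the other Intel and Mellanox IDs listed above):

  for pci in /sys/bus/pci/devices/*; do
      # 0x8086:0x159b is the E810 ID reported in the trace
      [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
      echo "Found ${pci##*/} ($(cat "$pci/vendor") - $(cat "$pci/device"))"
      ls "$pci/net" 2>/dev/null          # netdev name(s), e.g. cvl_0_0 / cvl_0_1
  done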
00:21:02.100 15:44:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.100 15:44:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:02.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:02.100 15:44:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.100 15:44:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:02.100 15:44:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:02.100 15:44:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:02.100 15:44:41 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:02.100 15:44:41 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.100 15:44:41 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:02.100 15:44:41 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:02.100 15:44:41 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:02.100 15:44:41 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:02.100 15:44:41 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:02.100 15:44:41 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:02.100 15:44:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.100 15:44:41 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:02.100 15:44:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:02.100 15:44:41 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:02.100 15:44:41 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:02.100 15:44:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:02.100 15:44:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:02.100 15:44:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:02.100 15:44:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:02.100 15:44:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:02.100 15:44:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:02.100 15:44:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:02.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:21:02.100 00:21:02.100 --- 10.0.0.2 ping statistics --- 00:21:02.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.100 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:21:02.100 15:44:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:02.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:21:02.100 00:21:02.100 --- 10.0.0.1 ping statistics --- 00:21:02.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.100 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:02.100 15:44:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.100 15:44:41 -- nvmf/common.sh@410 -- # return 0 00:21:02.100 15:44:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:02.100 15:44:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.100 15:44:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:02.100 15:44:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.100 15:44:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:02.100 15:44:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:02.100 15:44:41 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:02.100 15:44:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:02.100 15:44:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:02.100 15:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:02.100 15:44:41 -- nvmf/common.sh@469 -- # nvmfpid=2169947 00:21:02.100 15:44:41 -- nvmf/common.sh@470 -- # waitforlisten 2169947 00:21:02.100 15:44:41 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:02.100 15:44:41 -- common/autotest_common.sh@819 -- # '[' -z 2169947 ']' 00:21:02.100 15:44:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.100 15:44:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:02.100 15:44:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.100 15:44:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:02.100 15:44:41 -- common/autotest_common.sh@10 -- # set +x 00:21:02.100 [2024-07-10 15:44:41.389874] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:02.100 [2024-07-10 15:44:41.389954] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.100 EAL: No free 2048 kB hugepages reported on node 1 00:21:02.100 [2024-07-10 15:44:41.455615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.358 [2024-07-10 15:44:41.571994] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:02.358 [2024-07-10 15:44:41.572153] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.358 [2024-07-10 15:44:41.572172] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.358 [2024-07-10 15:44:41.572186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
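This second target instance is launched with --wait-for-rpc, so framework initialization is held back until the ADQ-related socket options have been set over RPC. The configuration the perf_adq test applies, visible as rpc_cmd calls in the trace that follows, comes down to roughly this sequence (again sketched as direct scripts/rpc.py invocations with arguments copied from the log):

  rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  rpc.py framework_start_init                                    # resume the deferred startup
  rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf is then run against that listener with -q 64 -o 4096 -w randread -t 10 -c 0xF0, and the nvmf_get_stats/jq check later in the trace confirms that four poll groups each report exactly one I/O qpair, i.e. one connection per target core.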
00:21:02.358 [2024-07-10 15:44:41.572274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.358 [2024-07-10 15:44:41.572340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.358 [2024-07-10 15:44:41.572605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:02.358 [2024-07-10 15:44:41.572609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:03.288 15:44:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:03.288 15:44:42 -- common/autotest_common.sh@852 -- # return 0 00:21:03.288 15:44:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:03.288 15:44:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 15:44:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.288 15:44:42 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:21:03.288 15:44:42 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:21:03.288 15:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 15:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.288 15:44:42 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:03.288 15:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 15:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.288 15:44:42 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:21:03.288 15:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 [2024-07-10 15:44:42.509322] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.288 15:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.288 15:44:42 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:03.288 15:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 Malloc1 00:21:03.288 15:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.288 15:44:42 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.288 15:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 15:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.288 15:44:42 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:03.288 15:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 15:44:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.288 15:44:42 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.288 15:44:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:03.288 15:44:42 -- common/autotest_common.sh@10 -- # set +x 00:21:03.288 [2024-07-10 15:44:42.561695] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.288 15:44:42 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:03.288 15:44:42 -- target/perf_adq.sh@73 -- # perfpid=2170114 00:21:03.288 15:44:42 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:03.288 15:44:42 -- target/perf_adq.sh@74 -- # sleep 2 00:21:03.288 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.202 15:44:44 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:21:05.202 15:44:44 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:21:05.202 15:44:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:05.202 15:44:44 -- target/perf_adq.sh@76 -- # wc -l 00:21:05.202 15:44:44 -- common/autotest_common.sh@10 -- # set +x 00:21:05.462 15:44:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:05.462 15:44:44 -- target/perf_adq.sh@76 -- # count=4 00:21:05.462 15:44:44 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:21:05.462 15:44:44 -- target/perf_adq.sh@81 -- # wait 2170114 00:21:13.572 Initializing NVMe Controllers 00:21:13.572 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:13.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:13.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:13.572 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:13.572 Initialization complete. Launching workers. 00:21:13.572 ======================================================== 00:21:13.572 Latency(us) 00:21:13.572 Device Information : IOPS MiB/s Average min max 00:21:13.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11282.38 44.07 5673.40 934.68 9167.71 00:21:13.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11684.88 45.64 5478.20 1155.52 8922.10 00:21:13.572 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9536.59 37.25 6712.58 2550.00 10537.85 00:21:13.573 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11707.28 45.73 5468.34 1087.08 8171.61 00:21:13.573 ======================================================== 00:21:13.573 Total : 44211.13 172.70 5791.67 934.68 10537.85 00:21:13.573 00:21:13.573 15:44:52 -- target/perf_adq.sh@82 -- # nvmftestfini 00:21:13.573 15:44:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:13.573 15:44:52 -- nvmf/common.sh@116 -- # sync 00:21:13.573 15:44:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:13.573 15:44:52 -- nvmf/common.sh@119 -- # set +e 00:21:13.573 15:44:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:13.573 15:44:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:13.573 rmmod nvme_tcp 00:21:13.573 rmmod nvme_fabrics 00:21:13.573 rmmod nvme_keyring 00:21:13.573 15:44:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:13.573 15:44:52 -- nvmf/common.sh@123 -- # set -e 00:21:13.573 15:44:52 -- nvmf/common.sh@124 -- # return 0 00:21:13.573 15:44:52 -- nvmf/common.sh@477 -- # '[' -n 2169947 ']' 00:21:13.573 15:44:52 -- nvmf/common.sh@478 -- # killprocess 2169947 00:21:13.573 15:44:52 -- common/autotest_common.sh@926 -- # '[' -z 2169947 ']' 00:21:13.573 15:44:52 -- common/autotest_common.sh@930 -- # 
kill -0 2169947 00:21:13.573 15:44:52 -- common/autotest_common.sh@931 -- # uname 00:21:13.573 15:44:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:13.573 15:44:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2169947 00:21:13.573 15:44:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:13.573 15:44:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:13.573 15:44:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2169947' 00:21:13.573 killing process with pid 2169947 00:21:13.573 15:44:52 -- common/autotest_common.sh@945 -- # kill 2169947 00:21:13.573 15:44:52 -- common/autotest_common.sh@950 -- # wait 2169947 00:21:13.840 15:44:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:13.840 15:44:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:13.840 15:44:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:13.840 15:44:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.840 15:44:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:13.840 15:44:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.840 15:44:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.840 15:44:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:15.743 15:44:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:15.743 15:44:55 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:21:15.743 15:44:55 -- target/perf_adq.sh@52 -- # rmmod ice 00:21:16.307 15:44:55 -- target/perf_adq.sh@53 -- # modprobe ice 00:21:18.832 15:44:57 -- target/perf_adq.sh@54 -- # sleep 5 00:21:24.107 15:45:02 -- target/perf_adq.sh@87 -- # nvmftestinit 00:21:24.107 15:45:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:24.107 15:45:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.107 15:45:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:24.107 15:45:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:24.107 15:45:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:24.107 15:45:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.107 15:45:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.107 15:45:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.107 15:45:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:24.107 15:45:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:24.107 15:45:02 -- common/autotest_common.sh@10 -- # set +x 00:21:24.107 15:45:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:24.107 15:45:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:24.107 15:45:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:24.107 15:45:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:24.107 15:45:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:24.107 15:45:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:24.107 15:45:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:24.107 15:45:02 -- nvmf/common.sh@294 -- # net_devs=() 00:21:24.107 15:45:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:24.107 15:45:02 -- nvmf/common.sh@295 -- # e810=() 00:21:24.107 15:45:02 -- nvmf/common.sh@295 -- # local -ga e810 00:21:24.107 15:45:02 -- nvmf/common.sh@296 -- # x722=() 00:21:24.107 15:45:02 -- nvmf/common.sh@296 -- # local -ga x722 00:21:24.107 15:45:02 -- nvmf/common.sh@297 -- # mlx=() 00:21:24.107 15:45:02 -- 
nvmf/common.sh@297 -- # local -ga mlx 00:21:24.107 15:45:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.107 15:45:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:24.107 15:45:02 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:24.107 15:45:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:24.107 15:45:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:24.107 15:45:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:24.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:24.107 15:45:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:24.107 15:45:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:24.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:24.107 15:45:02 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:24.107 15:45:02 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:24.107 15:45:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.107 15:45:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:24.107 15:45:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.107 15:45:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:24.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:24.107 15:45:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.107 15:45:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:24.107 15:45:02 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.107 15:45:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:24.107 15:45:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.107 15:45:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:24.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:24.107 15:45:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.107 15:45:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:24.107 15:45:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:24.107 15:45:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:24.107 15:45:02 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:24.107 15:45:02 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.107 15:45:02 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.107 15:45:02 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.107 15:45:02 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:24.107 15:45:02 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.107 15:45:02 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.107 15:45:02 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:24.107 15:45:02 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.107 15:45:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.107 15:45:02 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:24.107 15:45:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:24.107 15:45:02 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.107 15:45:02 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.107 15:45:02 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.107 15:45:02 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.107 15:45:02 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:24.107 15:45:02 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.107 15:45:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.107 15:45:02 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.107 15:45:02 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:24.107 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.107 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:21:24.107 00:21:24.107 --- 10.0.0.2 ping statistics --- 00:21:24.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.107 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:21:24.107 15:45:02 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.107 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:24.107 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:21:24.107 00:21:24.107 --- 10.0.0.1 ping statistics --- 00:21:24.107 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.108 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:24.108 15:45:02 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.108 15:45:02 -- nvmf/common.sh@410 -- # return 0 00:21:24.108 15:45:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:24.108 15:45:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.108 15:45:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:24.108 15:45:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:24.108 15:45:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.108 15:45:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:24.108 15:45:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:24.108 15:45:02 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:21:24.108 15:45:02 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:24.108 15:45:02 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:24.108 15:45:02 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:24.108 net.core.busy_poll = 1 00:21:24.108 15:45:02 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:24.108 net.core.busy_read = 1 00:21:24.108 15:45:02 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:24.108 15:45:02 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:24.108 15:45:02 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:24.108 15:45:02 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:24.108 15:45:02 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:24.108 15:45:02 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:24.108 15:45:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:24.108 15:45:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:24.108 15:45:02 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 15:45:02 -- nvmf/common.sh@469 -- # nvmfpid=2172924 00:21:24.108 15:45:02 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:24.108 15:45:02 -- nvmf/common.sh@470 -- # waitforlisten 2172924 00:21:24.108 15:45:02 -- common/autotest_common.sh@819 -- # '[' -z 2172924 ']' 00:21:24.108 15:45:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.108 15:45:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:24.108 15:45:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
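[annotation] The adq_configure_driver step traced above comes down to a short sequence of host-side commands. A condensed sketch of what the trace ran, assuming (as in this log) that the E810 port cvl_0_0 sits in the cvl_0_0_ns_spdk namespace and the NVMe/TCP listener will be 10.0.0.2:4420:

# hardware TC offload on, driver packet-inspect optimization off
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off

# busy polling: application threads poll their sockets instead of waiting on interrupts
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# two traffic classes: TC0 = queues 0-1 (default traffic), TC1 = queues 2-3 (ADQ traffic)
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress

# steer NVMe/TCP traffic for 10.0.0.2:4420 into TC1 in hardware (skip_sw)
ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: \
    prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The trace then runs scripts/perf/nvmf/set_xps_rxqs on the same port, which, as the name suggests, maps each transmit queue to its paired receive queue via XPS before the target is started.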
00:21:24.108 15:45:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:24.108 15:45:02 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 [2024-07-10 15:45:02.954442] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:24.108 [2024-07-10 15:45:02.954526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.108 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.108 [2024-07-10 15:45:03.018201] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.108 [2024-07-10 15:45:03.123228] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:24.108 [2024-07-10 15:45:03.123369] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.108 [2024-07-10 15:45:03.123387] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.108 [2024-07-10 15:45:03.123408] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:24.108 [2024-07-10 15:45:03.123467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:24.108 [2024-07-10 15:45:03.123505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.108 [2024-07-10 15:45:03.123526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.108 [2024-07-10 15:45:03.123528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.108 15:45:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:24.108 15:45:03 -- common/autotest_common.sh@852 -- # return 0 00:21:24.108 15:45:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:24.108 15:45:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:24.108 15:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 15:45:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:24.108 15:45:03 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:21:24.108 15:45:03 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:24.108 15:45:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.108 15:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 15:45:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.108 15:45:03 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:21:24.108 15:45:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.108 15:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 15:45:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.108 15:45:03 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:24.108 15:45:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.108 15:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 [2024-07-10 15:45:03.284210] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:24.108 15:45:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.108 15:45:03 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:24.108 15:45:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.108 15:45:03 -- 
common/autotest_common.sh@10 -- # set +x 00:21:24.108 Malloc1 00:21:24.108 15:45:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.108 15:45:03 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:24.108 15:45:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.108 15:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 15:45:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.108 15:45:03 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:24.108 15:45:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.108 15:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 15:45:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.108 15:45:03 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:24.108 15:45:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:24.108 15:45:03 -- common/autotest_common.sh@10 -- # set +x 00:21:24.108 [2024-07-10 15:45:03.336112] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:24.108 15:45:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:24.108 15:45:03 -- target/perf_adq.sh@94 -- # perfpid=2173031 00:21:24.108 15:45:03 -- target/perf_adq.sh@95 -- # sleep 2 00:21:24.108 15:45:03 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:24.108 EAL: No free 2048 kB hugepages reported on node 1 00:21:26.008 15:45:05 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:21:26.008 15:45:05 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:26.008 15:45:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:26.008 15:45:05 -- target/perf_adq.sh@97 -- # wc -l 00:21:26.008 15:45:05 -- common/autotest_common.sh@10 -- # set +x 00:21:26.008 15:45:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:26.267 15:45:05 -- target/perf_adq.sh@97 -- # count=2 00:21:26.267 15:45:05 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:21:26.267 15:45:05 -- target/perf_adq.sh@103 -- # wait 2173031 00:21:34.374 Initializing NVMe Controllers 00:21:34.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:34.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:34.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:34.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:34.374 Initialization complete. Launching workers. 
00:21:34.374 ======================================================== 00:21:34.374 Latency(us) 00:21:34.374 Device Information : IOPS MiB/s Average min max 00:21:34.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6500.00 25.39 9849.18 2726.67 50728.83 00:21:34.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7355.90 28.73 8707.64 1395.15 52901.08 00:21:34.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7608.70 29.72 8419.24 1696.40 53973.57 00:21:34.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 8158.00 31.87 7846.49 1456.75 51079.38 00:21:34.374 ======================================================== 00:21:34.374 Total : 29622.60 115.71 8646.89 1395.15 53973.57 00:21:34.374 00:21:34.374 15:45:13 -- target/perf_adq.sh@104 -- # nvmftestfini 00:21:34.374 15:45:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:34.374 15:45:13 -- nvmf/common.sh@116 -- # sync 00:21:34.374 15:45:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:34.374 15:45:13 -- nvmf/common.sh@119 -- # set +e 00:21:34.374 15:45:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:34.374 15:45:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:34.374 rmmod nvme_tcp 00:21:34.374 rmmod nvme_fabrics 00:21:34.374 rmmod nvme_keyring 00:21:34.374 15:45:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:34.374 15:45:13 -- nvmf/common.sh@123 -- # set -e 00:21:34.374 15:45:13 -- nvmf/common.sh@124 -- # return 0 00:21:34.374 15:45:13 -- nvmf/common.sh@477 -- # '[' -n 2172924 ']' 00:21:34.374 15:45:13 -- nvmf/common.sh@478 -- # killprocess 2172924 00:21:34.374 15:45:13 -- common/autotest_common.sh@926 -- # '[' -z 2172924 ']' 00:21:34.374 15:45:13 -- common/autotest_common.sh@930 -- # kill -0 2172924 00:21:34.374 15:45:13 -- common/autotest_common.sh@931 -- # uname 00:21:34.374 15:45:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:34.375 15:45:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2172924 00:21:34.375 15:45:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:34.375 15:45:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:34.375 15:45:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2172924' 00:21:34.375 killing process with pid 2172924 00:21:34.375 15:45:13 -- common/autotest_common.sh@945 -- # kill 2172924 00:21:34.375 15:45:13 -- common/autotest_common.sh@950 -- # wait 2172924 00:21:34.633 15:45:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:34.633 15:45:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:34.633 15:45:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:34.633 15:45:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:34.633 15:45:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:34.633 15:45:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.633 15:45:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:34.633 15:45:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.629 15:45:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:36.629 15:45:15 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT 00:21:36.629 00:21:36.629 real 0m44.395s 00:21:36.629 user 2m35.111s 00:21:36.629 sys 0m12.052s 00:21:36.629 15:45:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:36.629 15:45:15 -- common/autotest_common.sh@10 -- # set +x 00:21:36.629 
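[annotation] For reference, the target-side setup that produced the numbers above (adq_configure_nvmf_target, traced earlier in this test) is a short RPC sequence. A condensed sketch using the plain rpc.py client against the target's default /var/tmp/spdk.sock socket, which is roughly what the test's rpc_cmd helper amounts to:

# socket-level ADQ hooks: placement IDs and zero-copy send on the posix implementation
scripts/rpc.py sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
scripts/rpc.py framework_start_init

# TCP transport with 8 KiB IO units and socket priority 1
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1

# a 64 MB malloc namespace exported through cnode1 on 10.0.0.2:4420
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# initiator side, as run above
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The nvmf_get_stats | jq | wc -l check just before the run appears to be a sanity check that at least two of the four target poll groups still have zero io_qpairs, i.e. that ADQ confined the new connections to a subset of cores, before waiting on the perf process.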
************************************ 00:21:36.629 END TEST nvmf_perf_adq 00:21:36.629 ************************************ 00:21:36.629 15:45:15 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:36.629 15:45:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:36.629 15:45:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:36.629 15:45:15 -- common/autotest_common.sh@10 -- # set +x 00:21:36.629 ************************************ 00:21:36.629 START TEST nvmf_shutdown 00:21:36.629 ************************************ 00:21:36.629 15:45:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:36.886 * Looking for test storage... 00:21:36.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:36.886 15:45:16 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:36.886 15:45:16 -- nvmf/common.sh@7 -- # uname -s 00:21:36.887 15:45:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:36.887 15:45:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:36.887 15:45:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:36.887 15:45:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:36.887 15:45:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:36.887 15:45:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:36.887 15:45:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:36.887 15:45:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:36.887 15:45:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:36.887 15:45:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:36.887 15:45:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.887 15:45:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.887 15:45:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:36.887 15:45:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:36.887 15:45:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:36.887 15:45:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:36.887 15:45:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:36.887 15:45:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:36.887 15:45:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:36.887 15:45:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.887 15:45:16 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.887 15:45:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.887 15:45:16 -- paths/export.sh@5 -- # export PATH 00:21:36.887 15:45:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:36.887 15:45:16 -- nvmf/common.sh@46 -- # : 0 00:21:36.887 15:45:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:36.887 15:45:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:36.887 15:45:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:36.887 15:45:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:36.887 15:45:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:36.887 15:45:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:36.887 15:45:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:36.887 15:45:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:36.887 15:45:16 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:36.887 15:45:16 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:36.887 15:45:16 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:36.887 15:45:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:36.887 15:45:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:36.887 15:45:16 -- common/autotest_common.sh@10 -- # set +x 00:21:36.887 ************************************ 00:21:36.887 START TEST nvmf_shutdown_tc1 00:21:36.887 ************************************ 00:21:36.887 15:45:16 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:21:36.887 15:45:16 -- target/shutdown.sh@74 -- # starttarget 00:21:36.887 15:45:16 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:36.887 15:45:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:36.887 15:45:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:36.887 15:45:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:36.887 15:45:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:36.887 15:45:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:36.887 
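[annotation] The nvmftestinit trace that follows discovers the NICs by walking a prebuilt pci_bus_cache keyed by vendor:device ID (0x8086:0x159b for the E810 ports on this node) and then reading the netdev names out of sysfs. An illustrative approximation of that lookup, not the script's actual implementation:

# list the Intel E810 (8086:159b) PCI functions and the kernel netdevs bound to them
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    echo "Found $pci"
    ls "/sys/bus/pci/devices/$pci/net/"    # -> cvl_0_0 and cvl_0_1 on this machine
done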
15:45:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.887 15:45:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:36.887 15:45:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:36.887 15:45:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:36.887 15:45:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:36.887 15:45:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:36.887 15:45:16 -- common/autotest_common.sh@10 -- # set +x 00:21:38.786 15:45:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:38.786 15:45:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:38.786 15:45:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:38.786 15:45:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:38.786 15:45:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:38.786 15:45:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:38.786 15:45:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:38.786 15:45:18 -- nvmf/common.sh@294 -- # net_devs=() 00:21:38.786 15:45:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:38.786 15:45:18 -- nvmf/common.sh@295 -- # e810=() 00:21:38.786 15:45:18 -- nvmf/common.sh@295 -- # local -ga e810 00:21:38.786 15:45:18 -- nvmf/common.sh@296 -- # x722=() 00:21:38.786 15:45:18 -- nvmf/common.sh@296 -- # local -ga x722 00:21:38.786 15:45:18 -- nvmf/common.sh@297 -- # mlx=() 00:21:38.786 15:45:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:38.786 15:45:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:38.786 15:45:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:38.786 15:45:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:38.786 15:45:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:38.786 15:45:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:38.786 15:45:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:38.786 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:38.786 15:45:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:21:38.786 15:45:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:38.786 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:38.786 15:45:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:38.786 15:45:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:38.786 15:45:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.786 15:45:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:38.786 15:45:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.786 15:45:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:38.786 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:38.786 15:45:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.786 15:45:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:38.786 15:45:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:38.786 15:45:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:38.786 15:45:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:38.786 15:45:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:38.786 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:38.786 15:45:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:38.786 15:45:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:38.786 15:45:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:38.786 15:45:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:38.786 15:45:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:38.786 15:45:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:38.786 15:45:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:38.786 15:45:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:38.786 15:45:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:38.786 15:45:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:38.786 15:45:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:38.786 15:45:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:38.786 15:45:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:38.786 15:45:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:38.786 15:45:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:38.786 15:45:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:39.045 15:45:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:39.045 15:45:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:39.045 15:45:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:39.045 15:45:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:39.045 15:45:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:39.045 15:45:18 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:39.045 15:45:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:39.045 15:45:18 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:39.045 15:45:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:39.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:39.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:21:39.045 00:21:39.045 --- 10.0.0.2 ping statistics --- 00:21:39.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.045 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:21:39.045 15:45:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:39.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:39.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:21:39.045 00:21:39.045 --- 10.0.0.1 ping statistics --- 00:21:39.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:39.045 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:21:39.045 15:45:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:39.045 15:45:18 -- nvmf/common.sh@410 -- # return 0 00:21:39.045 15:45:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:39.045 15:45:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:39.045 15:45:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:39.045 15:45:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:39.045 15:45:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:39.045 15:45:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:39.045 15:45:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:39.045 15:45:18 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:39.045 15:45:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:39.045 15:45:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:39.045 15:45:18 -- common/autotest_common.sh@10 -- # set +x 00:21:39.045 15:45:18 -- nvmf/common.sh@469 -- # nvmfpid=2176787 00:21:39.045 15:45:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:39.045 15:45:18 -- nvmf/common.sh@470 -- # waitforlisten 2176787 00:21:39.045 15:45:18 -- common/autotest_common.sh@819 -- # '[' -z 2176787 ']' 00:21:39.045 15:45:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:39.045 15:45:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:39.045 15:45:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:39.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:39.045 15:45:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:39.045 15:45:18 -- common/autotest_common.sh@10 -- # set +x 00:21:39.045 [2024-07-10 15:45:18.364627] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
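[annotation] The nvmf_tcp_init sequence above sets up the usual back-to-back topology for these phy runs: the first E810 port (cvl_0_0) becomes the target side and is moved into its own network namespace, while the second port (cvl_0_1) stays in the root namespace as the initiator. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target NIC into the namespace

ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace

ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# allow NVMe/TCP (port 4420) in on the initiator interface, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is why every target-side command later in the log (nvmf_tgt, tc, ethtool) is wrapped in ip netns exec cvl_0_0_ns_spdk.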
00:21:39.045 [2024-07-10 15:45:18.364701] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:39.045 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.302 [2024-07-10 15:45:18.433210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:39.302 [2024-07-10 15:45:18.548443] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:39.302 [2024-07-10 15:45:18.548602] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:39.302 [2024-07-10 15:45:18.548632] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:39.302 [2024-07-10 15:45:18.548655] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:39.302 [2024-07-10 15:45:18.548778] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.302 [2024-07-10 15:45:18.548857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:39.302 [2024-07-10 15:45:18.548924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:39.302 [2024-07-10 15:45:18.548929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.231 15:45:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:40.231 15:45:19 -- common/autotest_common.sh@852 -- # return 0 00:21:40.231 15:45:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:40.231 15:45:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.231 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:21:40.231 15:45:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:40.231 15:45:19 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:40.231 15:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.231 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:21:40.231 [2024-07-10 15:45:19.373033] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.231 15:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.231 15:45:19 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:40.231 15:45:19 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:40.231 15:45:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:40.231 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:21:40.231 15:45:19 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- 
target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:40.231 15:45:19 -- target/shutdown.sh@28 -- # cat 00:21:40.231 15:45:19 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:40.231 15:45:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:40.231 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:21:40.231 Malloc1 00:21:40.231 [2024-07-10 15:45:19.448090] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:40.231 Malloc2 00:21:40.231 Malloc3 00:21:40.231 Malloc4 00:21:40.488 Malloc5 00:21:40.488 Malloc6 00:21:40.488 Malloc7 00:21:40.488 Malloc8 00:21:40.488 Malloc9 00:21:40.488 Malloc10 00:21:40.746 15:45:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:40.746 15:45:19 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:40.746 15:45:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:40.746 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:21:40.746 15:45:19 -- target/shutdown.sh@78 -- # perfpid=2176982 00:21:40.746 15:45:19 -- target/shutdown.sh@79 -- # waitforlisten 2176982 /var/tmp/bdevperf.sock 00:21:40.746 15:45:19 -- common/autotest_common.sh@819 -- # '[' -z 2176982 ']' 00:21:40.746 15:45:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:40.746 15:45:19 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:40.746 15:45:19 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:40.746 15:45:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:40.746 15:45:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:40.746 15:45:19 -- nvmf/common.sh@520 -- # config=() 00:21:40.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
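[annotation] gen_nvmf_target_json, whose xtrace fills the next stretch of the log, emits one bdev_nvme_attach_controller config entry per requested subsystem (Nvme1 through Nvme10 here), and the result is handed to the bdev application over a /dev/fd file descriptor. A single entry of that config array, as rendered further down, looks like:

{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}

and the consumers are launched roughly as in the trace (paths relative to the spdk checkout, fd plumbing abbreviated):

# first pass: a plain bdev_svc instance, deliberately killed with kill -9 a moment later by the tc1 flow
test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}")

# second pass: bdevperf driving verify I/O against the same ten controllers
build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 1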
00:21:40.746 15:45:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:40.746 15:45:19 -- nvmf/common.sh@520 -- # local subsystem config 00:21:40.746 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:21:40.746 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.746 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.746 { 00:21:40.746 "params": { 00:21:40.746 "name": "Nvme$subsystem", 00:21:40.746 "trtype": "$TEST_TRANSPORT", 00:21:40.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.746 "adrfam": "ipv4", 00:21:40.746 "trsvcid": "$NVMF_PORT", 00:21:40.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.746 "hdgst": ${hdgst:-false}, 00:21:40.746 "ddgst": ${ddgst:-false} 00:21:40.746 }, 00:21:40.746 "method": "bdev_nvme_attach_controller" 00:21:40.746 } 00:21:40.746 EOF 00:21:40.746 )") 00:21:40.746 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.746 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.746 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.746 { 00:21:40.746 "params": { 00:21:40.746 "name": "Nvme$subsystem", 00:21:40.746 "trtype": "$TEST_TRANSPORT", 00:21:40.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.746 "adrfam": "ipv4", 00:21:40.746 "trsvcid": "$NVMF_PORT", 00:21:40.746 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.746 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.746 "hdgst": ${hdgst:-false}, 00:21:40.746 "ddgst": ${ddgst:-false} 00:21:40.746 }, 00:21:40.746 "method": "bdev_nvme_attach_controller" 00:21:40.746 } 00:21:40.746 EOF 00:21:40.746 )") 00:21:40.746 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.746 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.746 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.746 { 00:21:40.746 "params": { 00:21:40.746 "name": "Nvme$subsystem", 00:21:40.746 "trtype": "$TEST_TRANSPORT", 00:21:40.746 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.746 "adrfam": "ipv4", 00:21:40.746 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.747 { 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme$subsystem", 00:21:40.747 "trtype": "$TEST_TRANSPORT", 00:21:40.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.747 { 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme$subsystem", 00:21:40.747 "trtype": "$TEST_TRANSPORT", 00:21:40.747 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.747 { 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme$subsystem", 00:21:40.747 "trtype": "$TEST_TRANSPORT", 00:21:40.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.747 { 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme$subsystem", 00:21:40.747 "trtype": "$TEST_TRANSPORT", 00:21:40.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.747 { 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme$subsystem", 00:21:40.747 "trtype": "$TEST_TRANSPORT", 00:21:40.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.747 { 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme$subsystem", 00:21:40.747 "trtype": "$TEST_TRANSPORT", 00:21:40.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:40.747 { 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme$subsystem", 00:21:40.747 "trtype": "$TEST_TRANSPORT", 00:21:40.747 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "$NVMF_PORT", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:40.747 "hdgst": ${hdgst:-false}, 00:21:40.747 "ddgst": ${ddgst:-false} 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 } 00:21:40.747 EOF 00:21:40.747 )") 00:21:40.747 15:45:19 -- nvmf/common.sh@542 -- # cat 00:21:40.747 15:45:19 -- nvmf/common.sh@544 -- # jq . 00:21:40.747 15:45:19 -- nvmf/common.sh@545 -- # IFS=, 00:21:40.747 15:45:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme1", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme2", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme3", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme4", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme5", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme6", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme7", 00:21:40.747 "trtype": 
"tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme8", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.747 "method": "bdev_nvme_attach_controller" 00:21:40.747 },{ 00:21:40.747 "params": { 00:21:40.747 "name": "Nvme9", 00:21:40.747 "trtype": "tcp", 00:21:40.747 "traddr": "10.0.0.2", 00:21:40.747 "adrfam": "ipv4", 00:21:40.747 "trsvcid": "4420", 00:21:40.747 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:40.747 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:40.747 "hdgst": false, 00:21:40.747 "ddgst": false 00:21:40.747 }, 00:21:40.748 "method": "bdev_nvme_attach_controller" 00:21:40.748 },{ 00:21:40.748 "params": { 00:21:40.748 "name": "Nvme10", 00:21:40.748 "trtype": "tcp", 00:21:40.748 "traddr": "10.0.0.2", 00:21:40.748 "adrfam": "ipv4", 00:21:40.748 "trsvcid": "4420", 00:21:40.748 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:40.748 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:40.748 "hdgst": false, 00:21:40.748 "ddgst": false 00:21:40.748 }, 00:21:40.748 "method": "bdev_nvme_attach_controller" 00:21:40.748 }' 00:21:40.748 [2024-07-10 15:45:19.937184] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:40.748 [2024-07-10 15:45:19.937272] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:40.748 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.748 [2024-07-10 15:45:20.002524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.748 [2024-07-10 15:45:20.111769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.641 15:45:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:42.641 15:45:21 -- common/autotest_common.sh@852 -- # return 0 00:21:42.641 15:45:21 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:42.641 15:45:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:42.641 15:45:21 -- common/autotest_common.sh@10 -- # set +x 00:21:42.641 15:45:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:42.641 15:45:21 -- target/shutdown.sh@83 -- # kill -9 2176982 00:21:42.641 15:45:21 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:42.641 15:45:21 -- target/shutdown.sh@87 -- # sleep 1 00:21:43.573 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2176982 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:43.573 15:45:22 -- target/shutdown.sh@88 -- # kill -0 2176787 00:21:43.573 15:45:22 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:43.573 15:45:22 -- target/shutdown.sh@91 -- # 
gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:43.573 15:45:22 -- nvmf/common.sh@520 -- # config=() 00:21:43.573 15:45:22 -- nvmf/common.sh@520 -- # local subsystem config 00:21:43.573 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.573 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.573 { 00:21:43.573 "params": { 00:21:43.573 "name": "Nvme$subsystem", 00:21:43.573 "trtype": "$TEST_TRANSPORT", 00:21:43.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.573 "adrfam": "ipv4", 00:21:43.573 "trsvcid": "$NVMF_PORT", 00:21:43.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.573 "hdgst": ${hdgst:-false}, 00:21:43.573 "ddgst": ${ddgst:-false} 00:21:43.573 }, 00:21:43.573 "method": "bdev_nvme_attach_controller" 00:21:43.573 } 00:21:43.573 EOF 00:21:43.573 )") 00:21:43.573 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.573 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.573 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.573 { 00:21:43.573 "params": { 00:21:43.573 "name": "Nvme$subsystem", 00:21:43.573 "trtype": "$TEST_TRANSPORT", 00:21:43.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.573 "adrfam": "ipv4", 00:21:43.573 "trsvcid": "$NVMF_PORT", 00:21:43.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.573 "hdgst": ${hdgst:-false}, 00:21:43.573 "ddgst": ${ddgst:-false} 00:21:43.573 }, 00:21:43.573 "method": "bdev_nvme_attach_controller" 00:21:43.573 } 00:21:43.573 EOF 00:21:43.573 )") 00:21:43.573 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.573 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.573 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.573 { 00:21:43.573 "params": { 00:21:43.573 "name": "Nvme$subsystem", 00:21:43.573 "trtype": "$TEST_TRANSPORT", 00:21:43.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.573 "adrfam": "ipv4", 00:21:43.573 "trsvcid": "$NVMF_PORT", 00:21:43.573 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.573 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.573 "hdgst": ${hdgst:-false}, 00:21:43.573 "ddgst": ${ddgst:-false} 00:21:43.573 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.574 { 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme$subsystem", 00:21:43.574 "trtype": "$TEST_TRANSPORT", 00:21:43.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "$NVMF_PORT", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.574 "hdgst": ${hdgst:-false}, 00:21:43.574 "ddgst": ${ddgst:-false} 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.574 { 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme$subsystem", 00:21:43.574 "trtype": "$TEST_TRANSPORT", 00:21:43.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.574 "adrfam": "ipv4", 
00:21:43.574 "trsvcid": "$NVMF_PORT", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.574 "hdgst": ${hdgst:-false}, 00:21:43.574 "ddgst": ${ddgst:-false} 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.574 { 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme$subsystem", 00:21:43.574 "trtype": "$TEST_TRANSPORT", 00:21:43.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "$NVMF_PORT", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.574 "hdgst": ${hdgst:-false}, 00:21:43.574 "ddgst": ${ddgst:-false} 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.574 { 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme$subsystem", 00:21:43.574 "trtype": "$TEST_TRANSPORT", 00:21:43.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "$NVMF_PORT", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.574 "hdgst": ${hdgst:-false}, 00:21:43.574 "ddgst": ${ddgst:-false} 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.574 { 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme$subsystem", 00:21:43.574 "trtype": "$TEST_TRANSPORT", 00:21:43.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "$NVMF_PORT", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.574 "hdgst": ${hdgst:-false}, 00:21:43.574 "ddgst": ${ddgst:-false} 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.574 { 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme$subsystem", 00:21:43.574 "trtype": "$TEST_TRANSPORT", 00:21:43.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "$NVMF_PORT", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.574 "hdgst": ${hdgst:-false}, 00:21:43.574 "ddgst": ${ddgst:-false} 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:43.574 15:45:22 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:43.574 { 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme$subsystem", 00:21:43.574 "trtype": "$TEST_TRANSPORT", 00:21:43.574 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "$NVMF_PORT", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.574 "hdgst": ${hdgst:-false}, 00:21:43.574 "ddgst": ${ddgst:-false} 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 } 00:21:43.574 EOF 00:21:43.574 )") 00:21:43.574 15:45:22 -- nvmf/common.sh@542 -- # cat 00:21:43.574 15:45:22 -- nvmf/common.sh@544 -- # jq . 00:21:43.574 15:45:22 -- nvmf/common.sh@545 -- # IFS=, 00:21:43.574 15:45:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme1", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme2", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme3", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme4", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme5", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme6", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme7", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 
00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme8", 00:21:43.574 "trtype": "tcp", 00:21:43.574 "traddr": "10.0.0.2", 00:21:43.574 "adrfam": "ipv4", 00:21:43.574 "trsvcid": "4420", 00:21:43.574 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:43.574 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:43.574 "hdgst": false, 00:21:43.574 "ddgst": false 00:21:43.574 }, 00:21:43.574 "method": "bdev_nvme_attach_controller" 00:21:43.574 },{ 00:21:43.574 "params": { 00:21:43.574 "name": "Nvme9", 00:21:43.574 "trtype": "tcp", 00:21:43.575 "traddr": "10.0.0.2", 00:21:43.575 "adrfam": "ipv4", 00:21:43.575 "trsvcid": "4420", 00:21:43.575 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:43.575 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:43.575 "hdgst": false, 00:21:43.575 "ddgst": false 00:21:43.575 }, 00:21:43.575 "method": "bdev_nvme_attach_controller" 00:21:43.575 },{ 00:21:43.575 "params": { 00:21:43.575 "name": "Nvme10", 00:21:43.575 "trtype": "tcp", 00:21:43.575 "traddr": "10.0.0.2", 00:21:43.575 "adrfam": "ipv4", 00:21:43.575 "trsvcid": "4420", 00:21:43.575 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:43.575 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:43.575 "hdgst": false, 00:21:43.575 "ddgst": false 00:21:43.575 }, 00:21:43.575 "method": "bdev_nvme_attach_controller" 00:21:43.575 }' 00:21:43.575 [2024-07-10 15:45:22.665585] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:43.575 [2024-07-10 15:45:22.665673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2177409 ] 00:21:43.575 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.575 [2024-07-10 15:45:22.731891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.575 [2024-07-10 15:45:22.840084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.469 Running I/O for 1 seconds... 
00:21:46.402 00:21:46.402 Latency(us) 00:21:46.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.402 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.402 Verification LBA range: start 0x0 length 0x400 00:21:46.402 Nvme1n1 : 1.09 397.95 24.87 0.00 0.00 157136.29 26214.40 138256.69 00:21:46.402 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.402 Verification LBA range: start 0x0 length 0x400 00:21:46.402 Nvme2n1 : 1.08 407.34 25.46 0.00 0.00 152079.79 31068.92 136703.24 00:21:46.402 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.402 Verification LBA range: start 0x0 length 0x400 00:21:46.402 Nvme3n1 : 1.08 403.34 25.21 0.00 0.00 152688.92 26991.12 120392.06 00:21:46.402 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.402 Verification LBA range: start 0x0 length 0x400 00:21:46.403 Nvme4n1 : 1.06 376.90 23.56 0.00 0.00 163058.67 20874.43 149907.53 00:21:46.403 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.403 Verification LBA range: start 0x0 length 0x400 00:21:46.403 Nvme5n1 : 1.06 375.73 23.48 0.00 0.00 162184.16 7912.87 142140.30 00:21:46.403 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.403 Verification LBA range: start 0x0 length 0x400 00:21:46.403 Nvme6n1 : 1.09 399.83 24.99 0.00 0.00 150348.47 30680.56 122722.23 00:21:46.403 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.403 Verification LBA range: start 0x0 length 0x400 00:21:46.403 Nvme7n1 : 1.09 397.16 24.82 0.00 0.00 152387.03 13495.56 125829.12 00:21:46.403 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.403 Verification LBA range: start 0x0 length 0x400 00:21:46.403 Nvme8n1 : 1.09 398.69 24.92 0.00 0.00 149241.99 25631.86 118838.61 00:21:46.403 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.403 Verification LBA range: start 0x0 length 0x400 00:21:46.403 Nvme9n1 : 1.10 402.55 25.16 0.00 0.00 148299.98 10777.03 132042.90 00:21:46.403 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:46.403 Verification LBA range: start 0x0 length 0x400 00:21:46.403 Nvme10n1 : 1.10 441.14 27.57 0.00 0.00 135275.10 4247.70 122722.23 00:21:46.403 =================================================================================================================== 00:21:46.403 Total : 4000.63 250.04 0.00 0.00 151888.68 4247.70 149907.53 00:21:46.403 15:45:25 -- target/shutdown.sh@93 -- # stoptarget 00:21:46.403 15:45:25 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:46.403 15:45:25 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:46.403 15:45:25 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:46.403 15:45:25 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:46.403 15:45:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:46.403 15:45:25 -- nvmf/common.sh@116 -- # sync 00:21:46.403 15:45:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:46.403 15:45:25 -- nvmf/common.sh@119 -- # set +e 00:21:46.403 15:45:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:46.403 15:45:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:46.403 rmmod nvme_tcp 00:21:46.403 rmmod nvme_fabrics 00:21:46.660 rmmod nvme_keyring 
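
The teardown traced here and continued below (stoptarget, nvmftestfini, killprocess) reduces to two steps: drop the host-side nvme modules with a tolerant retry loop, then signal the nvmf_tgt reactor and wait for it to exit. A simplified sketch of that shape; the retry bound and the process-name check mirror the trace, everything else is trimmed.

teardown_sketch() {
    local nvmfpid=$1 i
    sync
    # nvme-tcp/nvme-fabrics may still be busy while connections drain,
    # so tolerate failures and retry a bounded number of times.
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp
        modprobe -v -r nvme-fabrics && break
        sleep 1
    done
    set -e
    # Stop the target: confirm the pid still names an SPDK reactor before
    # signalling it (the real killprocess() also special-cases targets that
    # were launched through sudo), then reap it as a child job.
    if [[ $(ps --no-headers -o comm= "$nvmfpid") == reactor_* ]]; then
        echo "killing process with pid $nvmfpid"
        kill "$nvmfpid"
        wait "$nvmfpid" || true   # a signal exit status is expected here
    fi
}
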
00:21:46.660 15:45:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:46.660 15:45:25 -- nvmf/common.sh@123 -- # set -e 00:21:46.660 15:45:25 -- nvmf/common.sh@124 -- # return 0 00:21:46.660 15:45:25 -- nvmf/common.sh@477 -- # '[' -n 2176787 ']' 00:21:46.660 15:45:25 -- nvmf/common.sh@478 -- # killprocess 2176787 00:21:46.660 15:45:25 -- common/autotest_common.sh@926 -- # '[' -z 2176787 ']' 00:21:46.660 15:45:25 -- common/autotest_common.sh@930 -- # kill -0 2176787 00:21:46.660 15:45:25 -- common/autotest_common.sh@931 -- # uname 00:21:46.660 15:45:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:46.660 15:45:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2176787 00:21:46.660 15:45:25 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:46.660 15:45:25 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:46.660 15:45:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2176787' 00:21:46.660 killing process with pid 2176787 00:21:46.660 15:45:25 -- common/autotest_common.sh@945 -- # kill 2176787 00:21:46.660 15:45:25 -- common/autotest_common.sh@950 -- # wait 2176787 00:21:47.227 15:45:26 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:47.227 15:45:26 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:47.227 15:45:26 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:47.227 15:45:26 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.227 15:45:26 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:47.227 15:45:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.227 15:45:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.227 15:45:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.132 15:45:28 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:49.132 00:21:49.132 real 0m12.376s 00:21:49.132 user 0m36.008s 00:21:49.132 sys 0m3.315s 00:21:49.132 15:45:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:49.132 15:45:28 -- common/autotest_common.sh@10 -- # set +x 00:21:49.132 ************************************ 00:21:49.132 END TEST nvmf_shutdown_tc1 00:21:49.132 ************************************ 00:21:49.132 15:45:28 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:49.132 15:45:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:49.132 15:45:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:49.132 15:45:28 -- common/autotest_common.sh@10 -- # set +x 00:21:49.132 ************************************ 00:21:49.132 START TEST nvmf_shutdown_tc2 00:21:49.132 ************************************ 00:21:49.132 15:45:28 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:21:49.132 15:45:28 -- target/shutdown.sh@98 -- # starttarget 00:21:49.132 15:45:28 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:49.132 15:45:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:49.132 15:45:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.132 15:45:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:49.132 15:45:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:49.132 15:45:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:49.132 15:45:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.132 15:45:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:49.132 15:45:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.132 15:45:28 -- 
nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:49.132 15:45:28 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:49.132 15:45:28 -- common/autotest_common.sh@10 -- # set +x 00:21:49.132 15:45:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:49.132 15:45:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:49.132 15:45:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:49.132 15:45:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:49.132 15:45:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:49.132 15:45:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:49.132 15:45:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:49.132 15:45:28 -- nvmf/common.sh@294 -- # net_devs=() 00:21:49.132 15:45:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:49.132 15:45:28 -- nvmf/common.sh@295 -- # e810=() 00:21:49.132 15:45:28 -- nvmf/common.sh@295 -- # local -ga e810 00:21:49.132 15:45:28 -- nvmf/common.sh@296 -- # x722=() 00:21:49.132 15:45:28 -- nvmf/common.sh@296 -- # local -ga x722 00:21:49.132 15:45:28 -- nvmf/common.sh@297 -- # mlx=() 00:21:49.132 15:45:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:49.132 15:45:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:49.132 15:45:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:49.132 15:45:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:49.132 15:45:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:49.132 15:45:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:49.132 15:45:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:49.132 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:49.132 15:45:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:49.132 15:45:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:49.132 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:49.132 15:45:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:49.132 15:45:28 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:49.132 15:45:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:49.132 15:45:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.133 15:45:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.133 15:45:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.133 15:45:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.133 15:45:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:49.133 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:49.133 15:45:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.133 15:45:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:49.133 15:45:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:49.133 15:45:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:49.133 15:45:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:49.133 15:45:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:49.133 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:49.133 15:45:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:49.133 15:45:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:49.133 15:45:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:49.133 15:45:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:49.133 15:45:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:49.133 15:45:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:49.133 15:45:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.133 15:45:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:49.133 15:45:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:49.133 15:45:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:49.133 15:45:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:49.133 15:45:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:49.133 15:45:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:49.133 15:45:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:49.133 15:45:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.133 15:45:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:49.133 15:45:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:49.133 15:45:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:49.133 15:45:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:49.392 15:45:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:49.392 15:45:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:49.392 15:45:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:49.392 15:45:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:49.392 15:45:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:49.392 15:45:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
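
The nvmf_tcp_init trace above isolates one E810 port in a network namespace so target and initiator exchange real NVMe/TCP traffic on the same host; the two pings that follow verify reachability in both directions. Stripped of the helper plumbing, and assuming the cvl_0_0/cvl_0_1 names and 10.0.0.x addresses from this run, the sequence is:

# All of this requires root, as the test runs it.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
TARGET_IP=10.0.0.2
INITIATOR_IP=10.0.0.1
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"

# Move the target-side port into its own namespace.
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"

# Address both ends and bring the links (plus the namespaced loopback) up.
ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up

# Make sure the host firewall does not block NVMe/TCP (port 4420) on this link.
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
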
00:21:49.392 15:45:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:49.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:21:49.392 00:21:49.392 --- 10.0.0.2 ping statistics --- 00:21:49.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.392 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:21:49.392 15:45:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:49.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.069 ms 00:21:49.392 00:21:49.392 --- 10.0.0.1 ping statistics --- 00:21:49.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.392 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:21:49.392 15:45:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.392 15:45:28 -- nvmf/common.sh@410 -- # return 0 00:21:49.392 15:45:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:49.392 15:45:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.392 15:45:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:49.392 15:45:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:49.392 15:45:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.392 15:45:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:49.392 15:45:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:49.392 15:45:28 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:49.392 15:45:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:49.392 15:45:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:49.392 15:45:28 -- common/autotest_common.sh@10 -- # set +x 00:21:49.392 15:45:28 -- nvmf/common.sh@469 -- # nvmfpid=2178198 00:21:49.392 15:45:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:49.392 15:45:28 -- nvmf/common.sh@470 -- # waitforlisten 2178198 00:21:49.392 15:45:28 -- common/autotest_common.sh@819 -- # '[' -z 2178198 ']' 00:21:49.392 15:45:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.392 15:45:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:49.392 15:45:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.392 15:45:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:49.392 15:45:28 -- common/autotest_common.sh@10 -- # set +x 00:21:49.392 [2024-07-10 15:45:28.638725] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:21:49.392 [2024-07-10 15:45:28.638802] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.392 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.392 [2024-07-10 15:45:28.704955] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:49.651 [2024-07-10 15:45:28.820955] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:49.651 [2024-07-10 15:45:28.821121] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.651 [2024-07-10 15:45:28.821143] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.651 [2024-07-10 15:45:28.821157] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.651 [2024-07-10 15:45:28.821263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:49.651 [2024-07-10 15:45:28.821357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:49.651 [2024-07-10 15:45:28.821433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.651 [2024-07-10 15:45:28.821436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:50.585 15:45:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:50.585 15:45:29 -- common/autotest_common.sh@852 -- # return 0 00:21:50.585 15:45:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:50.585 15:45:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:50.585 15:45:29 -- common/autotest_common.sh@10 -- # set +x 00:21:50.585 15:45:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.585 15:45:29 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:50.585 15:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.585 15:45:29 -- common/autotest_common.sh@10 -- # set +x 00:21:50.585 [2024-07-10 15:45:29.632989] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:50.585 15:45:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.585 15:45:29 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:50.585 15:45:29 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:50.585 15:45:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:50.585 15:45:29 -- common/autotest_common.sh@10 -- # set +x 00:21:50.585 15:45:29 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- 
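
nvmfappstart, traced just above, launches nvmf_tgt inside the target namespace with core mask 0x1E and then blocks until the app's RPC socket answers. A minimal sketch of that start-and-wait shape; the polling loop is an illustrative stand-in for the real waitforlisten(), and the paths assume an SPDK checkout as the working directory.

start_target_sketch() {
    local ns=cvl_0_0_ns_spdk rpc_sock=/var/tmp/spdk.sock i
    # Same invocation as the trace: shm id 0, all tracepoint groups, cores 1-4,
    # run inside the target namespace, in the background.
    ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_sock..."
    # Poll the RPC socket until the app responds; framework_wait_init is the
    # same RPC the test itself issues once the socket is up.
    for i in {1..100}; do
        if ./scripts/rpc.py -s "$rpc_sock" framework_wait_init &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    echo "nvmf_tgt never came up (pid $nvmfpid)" >&2
    return 1
}
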
target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:50.585 15:45:29 -- target/shutdown.sh@28 -- # cat 00:21:50.585 15:45:29 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:50.585 15:45:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.585 15:45:29 -- common/autotest_common.sh@10 -- # set +x 00:21:50.585 Malloc1 00:21:50.585 [2024-07-10 15:45:29.707952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.585 Malloc2 00:21:50.585 Malloc3 00:21:50.585 Malloc4 00:21:50.585 Malloc5 00:21:50.585 Malloc6 00:21:50.845 Malloc7 00:21:50.845 Malloc8 00:21:50.845 Malloc9 00:21:50.845 Malloc10 00:21:50.845 15:45:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.845 15:45:30 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:50.845 15:45:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:50.845 15:45:30 -- common/autotest_common.sh@10 -- # set +x 00:21:50.845 15:45:30 -- target/shutdown.sh@102 -- # perfpid=2178396 00:21:50.845 15:45:30 -- target/shutdown.sh@103 -- # waitforlisten 2178396 /var/tmp/bdevperf.sock 00:21:50.845 15:45:30 -- common/autotest_common.sh@819 -- # '[' -z 2178396 ']' 00:21:50.845 15:45:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:50.845 15:45:30 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:50.845 15:45:30 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:50.845 15:45:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:50.845 15:45:30 -- nvmf/common.sh@520 -- # config=() 00:21:50.845 15:45:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:50.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
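
The create_subsystems step traced above does not issue dozens of separate rpc.py calls: shutdown.sh appends one block of RPC commands per subsystem to rpcs.txt and replays the whole file through a single rpc_cmd invocation, which is why Malloc1 through Malloc10 and the 4420 listener appear back to back. A hedged sketch of that batching pattern, using current SPDK RPC method names; the original script may differ in flags and bdev sizes.

create_subsystems_sketch() {
    local rpcs=./rpcs.txt i
    : > "$rpcs"
    for i in {1..10}; do
        # One malloc bdev, one subsystem, one namespace, one TCP listener per loop.
        cat >> "$rpcs" <<EOF
bdev_malloc_create -b Malloc$i 64 512
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
EOF
    done
    # One interpreter start-up instead of forty: feed the whole batch to rpc.py.
    ./scripts/rpc.py < "$rpcs"
}
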
00:21:50.845 15:45:30 -- nvmf/common.sh@520 -- # local subsystem config 00:21:50.845 15:45:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:50.845 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.845 15:45:30 -- common/autotest_common.sh@10 -- # set +x 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.845 { 00:21:50.845 "params": { 00:21:50.845 "name": "Nvme$subsystem", 00:21:50.845 "trtype": "$TEST_TRANSPORT", 00:21:50.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.845 "adrfam": "ipv4", 00:21:50.845 "trsvcid": "$NVMF_PORT", 00:21:50.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.845 "hdgst": ${hdgst:-false}, 00:21:50.845 "ddgst": ${ddgst:-false} 00:21:50.845 }, 00:21:50.845 "method": "bdev_nvme_attach_controller" 00:21:50.845 } 00:21:50.845 EOF 00:21:50.845 )") 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.845 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.845 { 00:21:50.845 "params": { 00:21:50.845 "name": "Nvme$subsystem", 00:21:50.845 "trtype": "$TEST_TRANSPORT", 00:21:50.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.845 "adrfam": "ipv4", 00:21:50.845 "trsvcid": "$NVMF_PORT", 00:21:50.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.845 "hdgst": ${hdgst:-false}, 00:21:50.845 "ddgst": ${ddgst:-false} 00:21:50.845 }, 00:21:50.845 "method": "bdev_nvme_attach_controller" 00:21:50.845 } 00:21:50.845 EOF 00:21:50.845 )") 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.845 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.845 { 00:21:50.845 "params": { 00:21:50.845 "name": "Nvme$subsystem", 00:21:50.845 "trtype": "$TEST_TRANSPORT", 00:21:50.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.845 "adrfam": "ipv4", 00:21:50.845 "trsvcid": "$NVMF_PORT", 00:21:50.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.845 "hdgst": ${hdgst:-false}, 00:21:50.845 "ddgst": ${ddgst:-false} 00:21:50.845 }, 00:21:50.845 "method": "bdev_nvme_attach_controller" 00:21:50.845 } 00:21:50.845 EOF 00:21:50.845 )") 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.845 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.845 { 00:21:50.845 "params": { 00:21:50.845 "name": "Nvme$subsystem", 00:21:50.845 "trtype": "$TEST_TRANSPORT", 00:21:50.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.845 "adrfam": "ipv4", 00:21:50.845 "trsvcid": "$NVMF_PORT", 00:21:50.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.845 "hdgst": ${hdgst:-false}, 00:21:50.845 "ddgst": ${ddgst:-false} 00:21:50.845 }, 00:21:50.845 "method": "bdev_nvme_attach_controller" 00:21:50.845 } 00:21:50.845 EOF 00:21:50.845 )") 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.845 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.845 { 00:21:50.845 "params": { 00:21:50.845 "name": "Nvme$subsystem", 00:21:50.845 "trtype": "$TEST_TRANSPORT", 00:21:50.845 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:21:50.845 "adrfam": "ipv4", 00:21:50.845 "trsvcid": "$NVMF_PORT", 00:21:50.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.845 "hdgst": ${hdgst:-false}, 00:21:50.845 "ddgst": ${ddgst:-false} 00:21:50.845 }, 00:21:50.845 "method": "bdev_nvme_attach_controller" 00:21:50.845 } 00:21:50.845 EOF 00:21:50.845 )") 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.845 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.845 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.845 { 00:21:50.845 "params": { 00:21:50.845 "name": "Nvme$subsystem", 00:21:50.845 "trtype": "$TEST_TRANSPORT", 00:21:50.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.845 "adrfam": "ipv4", 00:21:50.845 "trsvcid": "$NVMF_PORT", 00:21:50.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.846 "hdgst": ${hdgst:-false}, 00:21:50.846 "ddgst": ${ddgst:-false} 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 } 00:21:50.846 EOF 00:21:50.846 )") 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.846 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.846 { 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme$subsystem", 00:21:50.846 "trtype": "$TEST_TRANSPORT", 00:21:50.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "$NVMF_PORT", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.846 "hdgst": ${hdgst:-false}, 00:21:50.846 "ddgst": ${ddgst:-false} 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 } 00:21:50.846 EOF 00:21:50.846 )") 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.846 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.846 { 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme$subsystem", 00:21:50.846 "trtype": "$TEST_TRANSPORT", 00:21:50.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "$NVMF_PORT", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.846 "hdgst": ${hdgst:-false}, 00:21:50.846 "ddgst": ${ddgst:-false} 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 } 00:21:50.846 EOF 00:21:50.846 )") 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.846 15:45:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.846 { 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme$subsystem", 00:21:50.846 "trtype": "$TEST_TRANSPORT", 00:21:50.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "$NVMF_PORT", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.846 "hdgst": ${hdgst:-false}, 00:21:50.846 "ddgst": ${ddgst:-false} 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 } 00:21:50.846 EOF 00:21:50.846 )") 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.846 15:45:30 -- nvmf/common.sh@522 -- # for 
subsystem in "${@:-1}" 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:50.846 { 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme$subsystem", 00:21:50.846 "trtype": "$TEST_TRANSPORT", 00:21:50.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "$NVMF_PORT", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:50.846 "hdgst": ${hdgst:-false}, 00:21:50.846 "ddgst": ${ddgst:-false} 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 } 00:21:50.846 EOF 00:21:50.846 )") 00:21:50.846 15:45:30 -- nvmf/common.sh@542 -- # cat 00:21:50.846 15:45:30 -- nvmf/common.sh@544 -- # jq . 00:21:50.846 15:45:30 -- nvmf/common.sh@545 -- # IFS=, 00:21:50.846 15:45:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme1", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme2", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme3", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme4", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme5", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme6", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme7", 00:21:50.846 "trtype": 
"tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme8", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme9", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 },{ 00:21:50.846 "params": { 00:21:50.846 "name": "Nvme10", 00:21:50.846 "trtype": "tcp", 00:21:50.846 "traddr": "10.0.0.2", 00:21:50.846 "adrfam": "ipv4", 00:21:50.846 "trsvcid": "4420", 00:21:50.846 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:50.846 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:50.846 "hdgst": false, 00:21:50.846 "ddgst": false 00:21:50.846 }, 00:21:50.846 "method": "bdev_nvme_attach_controller" 00:21:50.846 }' 00:21:50.846 [2024-07-10 15:45:30.198172] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:50.846 [2024-07-10 15:45:30.198262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178396 ] 00:21:51.104 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.104 [2024-07-10 15:45:30.261829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.104 [2024-07-10 15:45:30.369321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.004 Running I/O for 10 seconds... 
00:21:53.571 15:45:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:53.571 15:45:32 -- common/autotest_common.sh@852 -- # return 0 00:21:53.571 15:45:32 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:53.571 15:45:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.571 15:45:32 -- common/autotest_common.sh@10 -- # set +x 00:21:53.571 15:45:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.571 15:45:32 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:53.571 15:45:32 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:53.571 15:45:32 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:53.571 15:45:32 -- target/shutdown.sh@57 -- # local ret=1 00:21:53.571 15:45:32 -- target/shutdown.sh@58 -- # local i 00:21:53.571 15:45:32 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:53.571 15:45:32 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:53.571 15:45:32 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:53.571 15:45:32 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:53.571 15:45:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.571 15:45:32 -- common/autotest_common.sh@10 -- # set +x 00:21:53.571 15:45:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.571 15:45:32 -- target/shutdown.sh@60 -- # read_io_count=254 00:21:53.571 15:45:32 -- target/shutdown.sh@63 -- # '[' 254 -ge 100 ']' 00:21:53.571 15:45:32 -- target/shutdown.sh@64 -- # ret=0 00:21:53.571 15:45:32 -- target/shutdown.sh@65 -- # break 00:21:53.571 15:45:32 -- target/shutdown.sh@69 -- # return 0 00:21:53.571 15:45:32 -- target/shutdown.sh@109 -- # killprocess 2178396 00:21:53.571 15:45:32 -- common/autotest_common.sh@926 -- # '[' -z 2178396 ']' 00:21:53.571 15:45:32 -- common/autotest_common.sh@930 -- # kill -0 2178396 00:21:53.571 15:45:32 -- common/autotest_common.sh@931 -- # uname 00:21:53.571 15:45:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:53.571 15:45:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2178396 00:21:53.571 15:45:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:53.571 15:45:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:53.571 15:45:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2178396' 00:21:53.571 killing process with pid 2178396 00:21:53.571 15:45:32 -- common/autotest_common.sh@945 -- # kill 2178396 00:21:53.571 15:45:32 -- common/autotest_common.sh@950 -- # wait 2178396 00:21:53.571 Received shutdown signal, test time was about 0.830591 seconds 00:21:53.571 00:21:53.571 Latency(us) 00:21:53.571 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:53.571 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme1n1 : 0.77 408.22 25.51 0.00 0.00 152502.16 25049.32 129712.73 00:21:53.571 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme2n1 : 0.83 379.63 23.73 0.00 0.00 154943.59 26408.58 134373.07 00:21:53.571 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme3n1 : 0.80 443.59 27.72 0.00 0.00 138978.57 13398.47 114178.28 00:21:53.571 Job: Nvme4n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme4n1 : 0.79 398.08 24.88 0.00 0.00 151798.79 25049.32 136703.24 00:21:53.571 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme5n1 : 0.79 397.64 24.85 0.00 0.00 150467.88 24855.13 135926.52 00:21:53.571 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme6n1 : 0.79 396.47 24.78 0.00 0.00 149783.63 24563.86 131266.18 00:21:53.571 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme7n1 : 0.77 407.08 25.44 0.00 0.00 143960.36 21068.61 114178.28 00:21:53.571 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme8n1 : 0.78 404.99 25.31 0.00 0.00 143135.25 25631.86 115731.72 00:21:53.571 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme9n1 : 0.78 404.67 25.29 0.00 0.00 141742.68 4975.88 117285.17 00:21:53.571 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:53.571 Verification LBA range: start 0x0 length 0x400 00:21:53.571 Nvme10n1 : 0.80 394.83 24.68 0.00 0.00 145504.87 15049.01 118838.61 00:21:53.571 =================================================================================================================== 00:21:53.571 Total : 4035.20 252.20 0.00 0.00 147174.26 4975.88 136703.24 00:21:53.832 15:45:33 -- target/shutdown.sh@112 -- # sleep 1 00:21:54.763 15:45:34 -- target/shutdown.sh@113 -- # kill -0 2178198 00:21:54.763 15:45:34 -- target/shutdown.sh@115 -- # stoptarget 00:21:54.763 15:45:34 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:54.763 15:45:34 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:54.763 15:45:34 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:54.763 15:45:34 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:54.763 15:45:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:54.763 15:45:34 -- nvmf/common.sh@116 -- # sync 00:21:54.763 15:45:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:54.763 15:45:34 -- nvmf/common.sh@119 -- # set +e 00:21:54.763 15:45:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:54.763 15:45:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:54.763 rmmod nvme_tcp 00:21:54.763 rmmod nvme_fabrics 00:21:55.021 rmmod nvme_keyring 00:21:55.021 15:45:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:55.021 15:45:34 -- nvmf/common.sh@123 -- # set -e 00:21:55.021 15:45:34 -- nvmf/common.sh@124 -- # return 0 00:21:55.021 15:45:34 -- nvmf/common.sh@477 -- # '[' -n 2178198 ']' 00:21:55.021 15:45:34 -- nvmf/common.sh@478 -- # killprocess 2178198 00:21:55.021 15:45:34 -- common/autotest_common.sh@926 -- # '[' -z 2178198 ']' 00:21:55.021 15:45:34 -- common/autotest_common.sh@930 -- # kill -0 2178198 00:21:55.021 15:45:34 -- common/autotest_common.sh@931 -- # uname 00:21:55.021 15:45:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:55.021 15:45:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 2178198 00:21:55.021 15:45:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:55.021 15:45:34 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:55.021 15:45:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2178198' 00:21:55.021 killing process with pid 2178198 00:21:55.021 15:45:34 -- common/autotest_common.sh@945 -- # kill 2178198 00:21:55.021 15:45:34 -- common/autotest_common.sh@950 -- # wait 2178198 00:21:55.587 15:45:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:55.587 15:45:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:55.587 15:45:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:55.587 15:45:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.587 15:45:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:55.587 15:45:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.587 15:45:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.587 15:45:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.492 15:45:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:57.492 00:21:57.492 real 0m8.344s 00:21:57.492 user 0m26.440s 00:21:57.492 sys 0m1.499s 00:21:57.492 15:45:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.492 15:45:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.492 ************************************ 00:21:57.492 END TEST nvmf_shutdown_tc2 00:21:57.492 ************************************ 00:21:57.492 15:45:36 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:57.492 15:45:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:21:57.492 15:45:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:57.492 15:45:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.492 ************************************ 00:21:57.492 START TEST nvmf_shutdown_tc3 00:21:57.492 ************************************ 00:21:57.492 15:45:36 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:21:57.492 15:45:36 -- target/shutdown.sh@120 -- # starttarget 00:21:57.492 15:45:36 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:57.492 15:45:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:57.492 15:45:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.492 15:45:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:57.492 15:45:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:57.492 15:45:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:57.492 15:45:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.492 15:45:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.492 15:45:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.492 15:45:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:57.492 15:45:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:57.492 15:45:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.492 15:45:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:57.492 15:45:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:57.492 15:45:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:57.492 15:45:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:57.492 15:45:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:57.492 15:45:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:57.492 15:45:36 -- nvmf/common.sh@292 
-- # local -A pci_drivers 00:21:57.492 15:45:36 -- nvmf/common.sh@294 -- # net_devs=() 00:21:57.492 15:45:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:57.492 15:45:36 -- nvmf/common.sh@295 -- # e810=() 00:21:57.492 15:45:36 -- nvmf/common.sh@295 -- # local -ga e810 00:21:57.492 15:45:36 -- nvmf/common.sh@296 -- # x722=() 00:21:57.492 15:45:36 -- nvmf/common.sh@296 -- # local -ga x722 00:21:57.492 15:45:36 -- nvmf/common.sh@297 -- # mlx=() 00:21:57.492 15:45:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:57.492 15:45:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:57.492 15:45:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:57.492 15:45:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:57.492 15:45:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:57.492 15:45:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.492 15:45:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:57.492 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:57.492 15:45:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:57.492 15:45:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:57.492 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:57.492 15:45:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:57.492 15:45:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:57.492 15:45:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.492 15:45:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.492 15:45:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
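The loop above has matched both Intel E810 physical functions (0000:0a:00.0 and 0000:0a:00.1, device 0x159b) and is now resolving each PCI function to its kernel interface by globbing /sys/bus/pci/devices/<addr>/net/; the "Found net devices under ..." lines that follow are the result. A minimal stand-alone sketch of that sysfs lookup, with the PCI address hard-coded as an example rather than taken from the harness's pci_bus_cache:

#!/usr/bin/env bash
# Resolve a PCI network function to its kernel netdev name(s) via sysfs.
shopt -s nullglob                        # an unmatched glob expands to nothing, not to itself
pci=0000:0a:00.0                         # example address; the harness iterates over every match
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
if (( ${#pci_net_devs[@]} == 0 )); then
    echo "no net device bound to $pci (is the ice driver loaded?)" >&2
    exit 1
fi
pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the sysfs path, keeping names like cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"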
00:21:57.492 15:45:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.492 15:45:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:57.492 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:57.492 15:45:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.492 15:45:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:57.492 15:45:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:57.492 15:45:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:57.492 15:45:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:57.492 15:45:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:57.492 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:57.492 15:45:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:57.492 15:45:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:57.492 15:45:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:57.492 15:45:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:57.493 15:45:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:57.493 15:45:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:57.493 15:45:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:57.493 15:45:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:57.493 15:45:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:57.493 15:45:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:57.493 15:45:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:57.493 15:45:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:57.493 15:45:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:57.493 15:45:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:57.493 15:45:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:57.493 15:45:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:57.493 15:45:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:57.493 15:45:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:57.493 15:45:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:57.751 15:45:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:57.751 15:45:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:57.751 15:45:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:57.751 15:45:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:57.751 15:45:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:57.751 15:45:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:57.751 15:45:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:57.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:57.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:21:57.751 00:21:57.751 --- 10.0.0.2 ping statistics --- 00:21:57.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.751 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:21:57.751 15:45:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:57.751 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:57.751 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:21:57.751 00:21:57.751 --- 10.0.0.1 ping statistics --- 00:21:57.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:57.751 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:21:57.751 15:45:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:57.751 15:45:36 -- nvmf/common.sh@410 -- # return 0 00:21:57.751 15:45:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:57.751 15:45:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:57.751 15:45:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:57.751 15:45:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:57.751 15:45:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:57.751 15:45:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:57.751 15:45:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:57.751 15:45:36 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:57.751 15:45:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:57.751 15:45:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:57.751 15:45:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.751 15:45:36 -- nvmf/common.sh@469 -- # nvmfpid=2179326 00:21:57.751 15:45:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:57.751 15:45:36 -- nvmf/common.sh@470 -- # waitforlisten 2179326 00:21:57.751 15:45:36 -- common/autotest_common.sh@819 -- # '[' -z 2179326 ']' 00:21:57.751 15:45:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.751 15:45:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:57.751 15:45:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.751 15:45:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:57.751 15:45:36 -- common/autotest_common.sh@10 -- # set +x 00:21:57.751 [2024-07-10 15:45:37.022965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:57.751 [2024-07-10 15:45:37.023068] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.751 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.751 [2024-07-10 15:45:37.092892] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:58.009 [2024-07-10 15:45:37.211729] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:58.009 [2024-07-10 15:45:37.211885] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:58.009 [2024-07-10 15:45:37.211904] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:58.010 [2024-07-10 15:45:37.211917] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
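The nvmf_tcp_init sequence above turns the two ports into a self-contained target/initiator pair: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and addressed 10.0.0.2 (the target side), cvl_0_1 stays in the root namespace at 10.0.0.1 (the initiator side), TCP port 4420 is opened in iptables, and reachability is confirmed in both directions before nvme-tcp is loaded and nvmf_tgt is launched inside the namespace via ip netns exec. A hedged reconstruction of just those commands, runnable on its own as root (the real harness keeps additional state around them):

#!/usr/bin/env bash
# Split two ports of one host into an NVMe/TCP target namespace and a root-namespace initiator.
TGT_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"                    # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                     # root namespace -> target namespace
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1             # target namespace -> root namespace

Running the target in its own network namespace is what lets a single host act as both NVMe/TCP target and initiator over real E810 ports without the kernel short-circuiting the traffic locally.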
00:21:58.010 [2024-07-10 15:45:37.212020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.010 [2024-07-10 15:45:37.212136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:58.010 [2024-07-10 15:45:37.212201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:58.010 [2024-07-10 15:45:37.212203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.944 15:45:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:58.944 15:45:37 -- common/autotest_common.sh@852 -- # return 0 00:21:58.944 15:45:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:58.944 15:45:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:58.944 15:45:37 -- common/autotest_common.sh@10 -- # set +x 00:21:58.944 15:45:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:58.944 15:45:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:58.944 15:45:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.944 15:45:37 -- common/autotest_common.sh@10 -- # set +x 00:21:58.944 [2024-07-10 15:45:37.988978] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.944 15:45:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:58.944 15:45:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:58.944 15:45:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:58.944 15:45:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:58.944 15:45:37 -- common/autotest_common.sh@10 -- # set +x 00:21:58.944 15:45:37 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:58.944 15:45:38 -- target/shutdown.sh@28 -- # cat 00:21:58.944 15:45:38 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:58.944 15:45:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:58.944 15:45:38 -- common/autotest_common.sh@10 -- # set +x 00:21:58.944 Malloc1 00:21:58.944 [2024-07-10 15:45:38.063952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.944 Malloc2 
00:21:58.944 Malloc3 00:21:58.944 Malloc4 00:21:58.944 Malloc5 00:21:58.944 Malloc6 00:21:59.203 Malloc7 00:21:59.203 Malloc8 00:21:59.203 Malloc9 00:21:59.203 Malloc10 00:21:59.203 15:45:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.203 15:45:38 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:59.203 15:45:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:59.203 15:45:38 -- common/autotest_common.sh@10 -- # set +x 00:21:59.203 15:45:38 -- target/shutdown.sh@124 -- # perfpid=2179639 00:21:59.203 15:45:38 -- target/shutdown.sh@125 -- # waitforlisten 2179639 /var/tmp/bdevperf.sock 00:21:59.203 15:45:38 -- common/autotest_common.sh@819 -- # '[' -z 2179639 ']' 00:21:59.203 15:45:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.203 15:45:38 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:59.203 15:45:38 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:59.203 15:45:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:59.203 15:45:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.203 15:45:38 -- nvmf/common.sh@520 -- # config=() 00:21:59.203 15:45:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:59.203 15:45:38 -- nvmf/common.sh@520 -- # local subsystem config 00:21:59.203 15:45:38 -- common/autotest_common.sh@10 -- # set +x 00:21:59.203 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.203 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.203 { 00:21:59.203 "params": { 00:21:59.203 "name": "Nvme$subsystem", 00:21:59.203 "trtype": "$TEST_TRANSPORT", 00:21:59.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.203 "adrfam": "ipv4", 00:21:59.203 "trsvcid": "$NVMF_PORT", 00:21:59.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.203 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:21:59.204 { 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme$subsystem", 00:21:59.204 "trtype": "$TEST_TRANSPORT", 00:21:59.204 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "$NVMF_PORT", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:59.204 "hdgst": ${hdgst:-false}, 00:21:59.204 "ddgst": ${ddgst:-false} 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 } 00:21:59.204 EOF 00:21:59.204 )") 00:21:59.204 15:45:38 -- nvmf/common.sh@542 -- # cat 00:21:59.204 15:45:38 -- nvmf/common.sh@544 -- # jq . 
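The config+=()/cat/jq sequence above is gen_nvmf_target_json assembling the --json document handed to bdevperf over /dev/fd/63: one bdev_nvme_attach_controller entry per subsystem ID, filled in from $NVMF_FIRST_TARGET_IP and $NVMF_PORT, then comma-joined and printed (the joined entries appear just below). A condensed sketch of that generator pattern; the exact top-level wrapper the framework adds is not visible in the trace, so the "subsystems" envelope here is an assumption, and jq is only used to validate and pretty-print as in the trace:

#!/usr/bin/env bash
gen_nvmf_target_json() {
    local subsystem
    local config=()
    for subsystem in "${@:-1}"; do
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,                          # makes ${config[*]} join the objects with commas
    jq . <<JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

gen_nvmf_target_json {1..10}             # one attach entry per cnode, matching the run above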
00:21:59.204 15:45:38 -- nvmf/common.sh@545 -- # IFS=, 00:21:59.204 15:45:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme1", 00:21:59.204 "trtype": "tcp", 00:21:59.204 "traddr": "10.0.0.2", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "4420", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.204 "hdgst": false, 00:21:59.204 "ddgst": false 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 },{ 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme2", 00:21:59.204 "trtype": "tcp", 00:21:59.204 "traddr": "10.0.0.2", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "4420", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:59.204 "hdgst": false, 00:21:59.204 "ddgst": false 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 },{ 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme3", 00:21:59.204 "trtype": "tcp", 00:21:59.204 "traddr": "10.0.0.2", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "4420", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:59.204 "hdgst": false, 00:21:59.204 "ddgst": false 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 },{ 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme4", 00:21:59.204 "trtype": "tcp", 00:21:59.204 "traddr": "10.0.0.2", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "4420", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:59.204 "hdgst": false, 00:21:59.204 "ddgst": false 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 },{ 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme5", 00:21:59.204 "trtype": "tcp", 00:21:59.204 "traddr": "10.0.0.2", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "4420", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:59.204 "hdgst": false, 00:21:59.204 "ddgst": false 00:21:59.204 }, 00:21:59.204 "method": "bdev_nvme_attach_controller" 00:21:59.204 },{ 00:21:59.204 "params": { 00:21:59.204 "name": "Nvme6", 00:21:59.204 "trtype": "tcp", 00:21:59.204 "traddr": "10.0.0.2", 00:21:59.204 "adrfam": "ipv4", 00:21:59.204 "trsvcid": "4420", 00:21:59.204 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:59.204 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:59.204 "hdgst": false, 00:21:59.205 "ddgst": false 00:21:59.205 }, 00:21:59.205 "method": "bdev_nvme_attach_controller" 00:21:59.205 },{ 00:21:59.205 "params": { 00:21:59.205 "name": "Nvme7", 00:21:59.205 "trtype": "tcp", 00:21:59.205 "traddr": "10.0.0.2", 00:21:59.205 "adrfam": "ipv4", 00:21:59.205 "trsvcid": "4420", 00:21:59.205 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:59.205 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:59.205 "hdgst": false, 00:21:59.205 "ddgst": false 00:21:59.205 }, 00:21:59.205 "method": "bdev_nvme_attach_controller" 00:21:59.205 },{ 00:21:59.205 "params": { 00:21:59.205 "name": "Nvme8", 00:21:59.205 "trtype": "tcp", 00:21:59.205 "traddr": "10.0.0.2", 00:21:59.205 "adrfam": "ipv4", 00:21:59.205 "trsvcid": "4420", 00:21:59.205 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:59.205 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:59.205 "hdgst": false, 00:21:59.205 "ddgst": false 00:21:59.205 }, 00:21:59.205 "method": 
"bdev_nvme_attach_controller" 00:21:59.205 },{ 00:21:59.205 "params": { 00:21:59.205 "name": "Nvme9", 00:21:59.205 "trtype": "tcp", 00:21:59.205 "traddr": "10.0.0.2", 00:21:59.205 "adrfam": "ipv4", 00:21:59.205 "trsvcid": "4420", 00:21:59.205 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:59.205 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:59.205 "hdgst": false, 00:21:59.205 "ddgst": false 00:21:59.205 }, 00:21:59.205 "method": "bdev_nvme_attach_controller" 00:21:59.205 },{ 00:21:59.205 "params": { 00:21:59.205 "name": "Nvme10", 00:21:59.205 "trtype": "tcp", 00:21:59.205 "traddr": "10.0.0.2", 00:21:59.205 "adrfam": "ipv4", 00:21:59.205 "trsvcid": "4420", 00:21:59.205 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:59.205 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:59.205 "hdgst": false, 00:21:59.205 "ddgst": false 00:21:59.205 }, 00:21:59.205 "method": "bdev_nvme_attach_controller" 00:21:59.205 }' 00:21:59.205 [2024-07-10 15:45:38.565667] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:59.205 [2024-07-10 15:45:38.565769] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179639 ] 00:21:59.463 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.463 [2024-07-10 15:45:38.628692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.463 [2024-07-10 15:45:38.736859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.362 Running I/O for 10 seconds... 00:22:01.945 15:45:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.945 15:45:41 -- common/autotest_common.sh@852 -- # return 0 00:22:01.945 15:45:41 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:01.945 15:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.945 15:45:41 -- common/autotest_common.sh@10 -- # set +x 00:22:01.945 15:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.945 15:45:41 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:01.945 15:45:41 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:22:01.945 15:45:41 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:22:01.945 15:45:41 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:22:01.945 15:45:41 -- target/shutdown.sh@57 -- # local ret=1 00:22:01.945 15:45:41 -- target/shutdown.sh@58 -- # local i 00:22:01.945 15:45:41 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:22:01.945 15:45:41 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:22:01.945 15:45:41 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:22:01.945 15:45:41 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:22:01.945 15:45:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.945 15:45:41 -- common/autotest_common.sh@10 -- # set +x 00:22:01.945 15:45:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.945 15:45:41 -- target/shutdown.sh@60 -- # read_io_count=256 00:22:01.945 15:45:41 -- target/shutdown.sh@63 -- # '[' 256 -ge 100 ']' 00:22:01.945 15:45:41 -- target/shutdown.sh@64 -- # ret=0 00:22:01.945 15:45:41 -- target/shutdown.sh@65 -- # break 00:22:01.945 15:45:41 -- target/shutdown.sh@69 -- # return 0 00:22:01.945 15:45:41 -- target/shutdown.sh@134 -- # killprocess 
2179326 00:22:01.945 15:45:41 -- common/autotest_common.sh@926 -- # '[' -z 2179326 ']' 00:22:01.945 15:45:41 -- common/autotest_common.sh@930 -- # kill -0 2179326 00:22:01.945 15:45:41 -- common/autotest_common.sh@931 -- # uname 00:22:01.945 15:45:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:01.945 15:45:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2179326 00:22:01.945 15:45:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:01.945 15:45:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:01.946 15:45:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2179326' 00:22:01.946 killing process with pid 2179326 00:22:01.946 15:45:41 -- common/autotest_common.sh@945 -- # kill 2179326 00:22:01.946 15:45:41 -- common/autotest_common.sh@950 -- # wait 2179326 00:22:01.946 [2024-07-10 15:45:41.152653] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152782] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152877] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152901] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152961] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set 00:22:01.946 [2024-07-10 15:45:41.152973] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8a70 is same with the state(5) to be set
[the same tcp.c:1574:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats continuously from 15:45:41.152653 through 15:45:41.159829, first for tqpair=0x9f8a70, then 0x9f6590, 0x9f6a40, 0x9f6ed0 and 0x9f7380]
00:22:01.948 [2024-07-10 15:45:41.159841] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159853] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159876] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159900] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159924] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159936] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.159995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160152] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is 
same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160194] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160216] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160331] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.948 [2024-07-10 15:45:41.160446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160459] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.948 [2024-07-10 15:45:41.160472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948
[2024-07-10 15:45:41.160495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.948 [2024-07-10 15:45:41.160496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7380 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.948 [2024-07-10 15:45:41.160540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd32b0 is same with the state(5) to be set 00:22:01.948 [2024-07-10 15:45:41.160692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.948 [2024-07-10 15:45:41.160732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.948 [2024-07-10 15:45:41.160761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.948 [2024-07-10 15:45:41.160788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.948 [2024-07-10 15:45:41.160801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.160814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100ba60 is same with the state(5) to be set 00:22:01.949 [2024-07-10 15:45:41.160861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.160882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.160970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.160985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.949 [2024-07-10 15:45:41.160999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161026] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161053] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda8a50 is same with the state(5) to be set 00:22:01.949 [2024-07-10 15:45:41.161101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45210 is same with the state(5) to be set 00:22:01.949 [2024-07-10 15:45:41.161272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161333] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.949 [2024-07-10 15:45:41.161375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47f70 is same with the state(5) to be set 00:22:01.949 [2024-07-10 15:45:41.161480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.161984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.161998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.162011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.162026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.162039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.162054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.162067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.162082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.162095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.162113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.162127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.162142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.162155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.162170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.949 [2024-07-10 15:45:41.162184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.949 [2024-07-10 15:45:41.162199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.950 [2024-07-10 15:45:41.162368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 
[2024-07-10 15:45:41.162688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.162973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.162986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 
15:45:41.163001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163341] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.950 [2024-07-10 15:45:41.163521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:45568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.950 [2024-07-10 15:45:41.163545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.951 [2024-07-10 15:45:41.163572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.951 [2024-07-10 15:45:41.163598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.951 [2024-07-10 15:45:41.163623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbcf50 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.163710] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfbcf50 was disconnected and freed. reset controller. 
00:22:01.951 [2024-07-10 15:45:41.165402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165460] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165485] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165522] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165534] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165660] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165672] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is 
same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165743] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165804] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165817] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165829] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165864] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165909] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165946] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165959] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.165992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166050] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166096] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166118] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166175] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166187] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.166198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7830 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167453] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167469] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167585] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167597] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167622] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167695] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.951 [2024-07-10 15:45:41.167738] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 
00:22:01.951 [2024-07-10 15:45:41.167750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167774] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167930] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167964] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167976] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167987] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.167999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is 
same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168033] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168102] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168136] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168147] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168161] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.168174] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f7cc0 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169262] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169286] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169298] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169310] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169391] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169436] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169478] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169565] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169577] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169594] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169645] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169656] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169668] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169681] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169693] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169739] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169751] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169762] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169787] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169811] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 
00:22:01.952 [2024-07-10 15:45:41.169916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.169994] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.952 [2024-07-10 15:45:41.170005] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f8150 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170690] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170754] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170778] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170803] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170826] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170838] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170850] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is 
same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170886] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170925] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170938] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170969] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170981] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.170993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171031] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171055] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171067] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171079] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171103] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171115] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171128] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171141] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171153] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171165] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171200] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171212] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171224] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171333] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171345] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171368] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171380] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171403] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171420] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.171547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9f85e0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.172421] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172572] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10068c0 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.172673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172719] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 
15:45:41.172759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172798] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3ee60 is same with the state(5) to be set 00:22:01.953 [2024-07-10 15:45:41.172831] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd32b0 (9): Bad file descriptor 00:22:01.953 [2024-07-10 15:45:41.172884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.953 [2024-07-10 15:45:41.172904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.953 [2024-07-10 15:45:41.172918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.172932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.172946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.172960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.172975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.172988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3ea30 is same with the state(5) to be set 00:22:01.954 [2024-07-10 15:45:41.173045] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173107] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd2e20 is same with the state(5) to be set 00:22:01.954 [2024-07-10 15:45:41.173211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:01.954 [2024-07-10 15:45:41.173312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.173324] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006490 is same with the state(5) to be set 00:22:01.954 [2024-07-10 15:45:41.173352] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100ba60 (9): Bad file descriptor 00:22:01.954 [2024-07-10 15:45:41.173382] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda8a50 (9): Bad file descriptor 00:22:01.954 [2024-07-10 15:45:41.173409] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe45210 (9): Bad file descriptor 00:22:01.954 [2024-07-10 15:45:41.173457] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe47f70 (9): Bad file descriptor 00:22:01.954 [2024-07-10 15:45:41.174629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174719] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.174975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.174989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.954 [2024-07-10 15:45:41.175269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.954 [2024-07-10 15:45:41.175282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:01.955 [2024-07-10 15:45:41.175950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.175984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.175998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 
[2024-07-10 15:45:41.176251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 
15:45:41.176570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.955 [2024-07-10 15:45:41.176584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.955 [2024-07-10 15:45:41.176599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe34e70 is same with the state(5) to be set 00:22:01.955 [2024-07-10 15:45:41.176666] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xe34e70 was disconnected and freed. reset controller. 00:22:01.956 [2024-07-10 15:45:41.176864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.176896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.176917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.176933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.176951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.176965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.176982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177785] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.177982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.177998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.178013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.178028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.178042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.178058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.178071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.178092] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.178106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.178122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.956 [2024-07-10 15:45:41.178135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.956 [2024-07-10 15:45:41.178151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178387] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178742] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.178890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.178904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179011] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf84aa0 was disconnected and freed. reset controller. 
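The repeated "ABORTED - SQ DELETION (00/08)" completions above are the target failing every command that was still queued when its submission queue was torn down during the controller reset: the pair printed in parentheses is the NVMe Status Code Type (0x0, generic) and Status Code (0x08, Command Aborted due to SQ Deletion). As a minimal sketch, assuming the standard NVMe completion-queue-entry layout, the helper below (illustrative only, not part of SPDK) shows how that pair is unpacked from completion dword 3.

#include <stdint.h>
#include <stdio.h>

/* Illustrative decode of NVMe completion dword 3 (not SPDK code).
 * Bits 15:0 = command identifier, bit 16 = phase tag,
 * bits 24:17 = Status Code (SC), bits 27:25 = Status Code Type (SCT),
 * bit 31 = Do Not Retry (DNR).  SCT 0x0 / SC 0x08 is the generic status
 * "Command Aborted due to SQ Deletion", printed above as "(00/08)". */
static void decode_cqe_dw3(uint32_t dw3)
{
    unsigned cid = dw3 & 0xffffu;        /* command identifier */
    unsigned sc  = (dw3 >> 17) & 0xffu;  /* Status Code        */
    unsigned sct = (dw3 >> 25) & 0x07u;  /* Status Code Type   */
    unsigned dnr = (dw3 >> 31) & 0x01u;  /* Do Not Retry       */

    printf("cid:%u sct:0x%02x sc:0x%02x dnr:%u\n", cid, sct, sc, dnr);
}

int main(void)
{
    decode_cqe_dw3(0x08u << 17);         /* prints sct:0x00 sc:0x08, i.e. "(00/08)" */
    return 0;
}

In the entries above, dnr:0 marks the aborted commands as retryable, which is consistent with the "reset controller" notices that follow each freed qpair.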
00:22:01.957 [2024-07-10 15:45:41.179218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 
15:45:41.179553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.957 [2024-07-10 15:45:41.179745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.957 [2024-07-10 15:45:41.179759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.179789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.179819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.179849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179866] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.179880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.179909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.179940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.179971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.179987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180802] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.958 [2024-07-10 15:45:41.180867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.958 [2024-07-10 15:45:41.180880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.180901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.180915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.180931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.180945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.180961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.180974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.180990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181112] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.181211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.181314] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf88bc0 was disconnected and freed. reset controller. 00:22:01.959 [2024-07-10 15:45:41.181504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.959 [2024-07-10 15:45:41.185473] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:01.959 [2024-07-10 15:45:41.186214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.959 [2024-07-10 15:45:41.186359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.959 [2024-07-10 15:45:41.186385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe45210 with addr=10.0.0.2, port=4420 00:22:01.959 [2024-07-10 15:45:41.186404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45210 is same with the state(5) to be set 00:22:01.959 [2024-07-10 15:45:41.186451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10068c0 (9): Bad file descriptor 00:22:01.959 [2024-07-10 15:45:41.186484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3ee60 (9): Bad file descriptor 00:22:01.959 [2024-07-10 15:45:41.186529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3ea30 (9): Bad file descriptor 00:22:01.959 [2024-07-10 15:45:41.186558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd2e20 (9): Bad file descriptor 00:22:01.959 [2024-07-10 15:45:41.186590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006490 (9): Bad file descriptor 00:22:01.959 [2024-07-10 15:45:41.186996] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.959 [2024-07-10 15:45:41.187077] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.959 [2024-07-10 15:45:41.187360] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.959 [2024-07-10 
15:45:41.187390] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:01.959 [2024-07-10 15:45:41.187614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.959 [2024-07-10 15:45:41.187761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.959 [2024-07-10 15:45:41.187786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100ba60 with addr=10.0.0.2, port=4420 00:22:01.959 [2024-07-10 15:45:41.187803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100ba60 is same with the state(5) to be set 00:22:01.959 [2024-07-10 15:45:41.187822] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe45210 (9): Bad file descriptor 00:22:01.959 [2024-07-10 15:45:41.188187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188465] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188782] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.959 [2024-07-10 15:45:41.188871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.959 [2024-07-10 15:45:41.188887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.188901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.188917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.188930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.188946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.188960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.188976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.188989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189380] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189700] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.189981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.189997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.190011] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.190026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.190040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.190056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.190070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.960 [2024-07-10 15:45:41.190085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.960 [2024-07-10 15:45:41.190099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.190115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.190128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.190144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.190158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.190174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.190188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.190202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbd270 is same with the state(5) to be set 00:22:01.961 [2024-07-10 15:45:41.191485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191598] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 
nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.191976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.191992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:38656 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.961 [2024-07-10 15:45:41.192622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.961 [2024-07-10 15:45:41.192637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:01.962 [2024-07-10 15:45:41.192841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.192977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.192990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 
[2024-07-10 15:45:41.193145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.193436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 
15:45:41.193469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.193484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbe7f0 is same with the state(5) to be set 00:22:01.962 [2024-07-10 15:45:41.195036] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.962 [2024-07-10 15:45:41.195503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.195527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.195550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.195566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.195583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.195597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.195613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.195628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.195644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.195657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.195673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.195687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.195704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.962 [2024-07-10 15:45:41.195718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.962 [2024-07-10 15:45:41.195736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.195981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.195995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:01.963 [2024-07-10 15:45:41.196724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.196981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.963 [2024-07-10 15:45:41.196996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.963 [2024-07-10 15:45:41.197010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 
[2024-07-10 15:45:41.197026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 
15:45:41.197331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.197478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.197493] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2047c50 is same with the state(5) to be set 00:22:01.964 [2024-07-10 15:45:41.199710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:22:01.964 [2024-07-10 15:45:41.199752] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:22:01.964 [2024-07-10 15:45:41.199774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:22:01.964 [2024-07-10 15:45:41.199792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:22:01.964 [2024-07-10 15:45:41.200032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.964 [2024-07-10 15:45:41.200182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.964 [2024-07-10 15:45:41.200208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10068c0 with addr=10.0.0.2, port=4420 00:22:01.964 [2024-07-10 15:45:41.200224] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10068c0 is same with the state(5) to be set 00:22:01.964 [2024-07-10 15:45:41.200250] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100ba60 (9): Bad file descriptor 00:22:01.964 [2024-07-10 15:45:41.200268] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.964 [2024-07-10 15:45:41.200282] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.964 [2024-07-10 
15:45:41.200298] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:01.964 [2024-07-10 15:45:41.200360] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.964 [2024-07-10 15:45:41.200421] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.964 [2024-07-10 15:45:41.200454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10068c0 (9): Bad file descriptor 00:22:01.964 [2024-07-10 15:45:41.200615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.200975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.200995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:01.964 [2024-07-10 15:45:41.201342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.964 [2024-07-10 15:45:41.201380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.964 [2024-07-10 15:45:41.201402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 
[2024-07-10 15:45:41.201740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.201967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.201988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 
15:45:41.202115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202502] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.202976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.202992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.203013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.965 [2024-07-10 15:45:41.203029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.965 [2024-07-10 15:45:41.203050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.203066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.203911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf875e0 is same with the state(5) to be set 00:22:01.966 [2024-07-10 15:45:41.203993] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf875e0 was disconnected and freed. reset controller. 00:22:01.966 [2024-07-10 15:45:41.204010] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.966 [2024-07-10 15:45:41.204111] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:22:01.966 [2024-07-10 15:45:41.204158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:01.966 [2024-07-10 15:45:41.204322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.204468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.204494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3ea30 with addr=10.0.0.2, port=4420 00:22:01.966 [2024-07-10 15:45:41.204516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3ea30 is same with the state(5) to be set 00:22:01.966 [2024-07-10 15:45:41.204642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.204772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.204804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe47f70 with addr=10.0.0.2, port=4420 00:22:01.966 [2024-07-10 15:45:41.204820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47f70 is same with the state(5) to be set 00:22:01.966 [2024-07-10 15:45:41.204936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.205074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.205106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xda8a50 with addr=10.0.0.2, port=4420 00:22:01.966 [2024-07-10 15:45:41.205121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xda8a50 is same with the state(5) to be set 00:22:01.966 [2024-07-10 15:45:41.205242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.205374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.966 [2024-07-10 15:45:41.205398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd32b0 with addr=10.0.0.2, port=4420 00:22:01.966 [2024-07-10 15:45:41.205430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd32b0 is same with the state(5) to be set 00:22:01.966 [2024-07-10 15:45:41.205450] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:01.966 [2024-07-10 15:45:41.205463] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:01.966 [2024-07-10 15:45:41.205478] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:22:01.966 [2024-07-10 15:45:41.206146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 
15:45:41.206490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206805] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.206985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.206999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.207015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.207029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.207045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.207059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.207075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.966 [2024-07-10 15:45:41.207089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.966 [2024-07-10 15:45:41.207105] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:41600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:41728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:41984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:42112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:42496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:42624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:42752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:42880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:43136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:43264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:43392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:43520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:43648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:43776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:44672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.207978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.207994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:44800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.208008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.208023] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:44928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.208037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.208053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.208067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.208083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:45184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.208097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.208113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:45312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.208127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.208142] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf86040 is same with the state(5) to be set 00:22:01.967 [2024-07-10 15:45:41.209448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.209472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.209493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.209509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.209525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.209540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.209556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.967 [2024-07-10 15:45:41.209570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.967 [2024-07-10 15:45:41.209586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209635] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.209978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.209997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210248] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210562] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210877] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.968 [2024-07-10 15:45:41.210905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.968 [2024-07-10 15:45:41.210921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.210935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.210951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.210964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.210980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.210993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211181] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:01.969 [2024-07-10 15:45:41.211422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:01.969 [2024-07-10 15:45:41.211444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ea4f80 is same with the state(5) to be set 00:22:01.969 [2024-07-10 15:45:41.213323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
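The long run of ABORTED - SQ DELETION completions above is the expected flood of in-flight I/O being failed back when the target tears down its submission queues during shutdown; the (00/08) status is status code type 0x0 (generic command status) with status code 0x08 (Command Aborted due to SQ Deletion). Rather than scrolling through the spam, a saved copy of this console output can be summarized with a couple of greps; console.log is a hypothetical name for that saved copy:
grep -c 'ABORTED - SQ DELETION' console.log                                        # total aborted completions in this run
grep -oE 'print_command: \*NOTICE\*: (READ|WRITE)' console.log | sort | uniq -c    # aborted I/O split by opcode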
00:22:01.969 [2024-07-10 15:45:41.213355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:22:01.969 [2024-07-10 15:45:41.213377] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:22:01.969 task offset: 40576 on job bdev=Nvme1n1 fails
00:22:01.969
00:22:01.969 Latency(us)
00:22:01.969 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:01.969 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme1n1 ended in about 0.84 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme1n1 : 0.84 348.19 21.76 75.80 0.00 150033.10 14466.47 166995.44
00:22:01.969 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme2n1 ended in about 0.85 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme2n1 : 0.85 341.33 21.33 75.07 0.00 151421.73 10534.31 157674.76
00:22:01.969 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme3n1 ended in about 0.86 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme3n1 : 0.86 337.95 21.12 74.33 0.00 151561.08 78449.02 125052.40
00:22:01.969 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme4n1 ended in about 0.86 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme4n1 : 0.86 336.68 21.04 74.05 0.00 150777.24 74177.04 125829.12
00:22:01.969 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme5n1 ended in about 0.85 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme5n1 : 0.85 340.82 21.30 74.96 0.00 147504.07 42137.22 129712.73
00:22:01.969 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme6n1 ended in about 0.88 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme6n1 : 0.88 331.07 20.69 72.81 0.00 150705.59 80390.83 126605.84
00:22:01.969 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme7n1 ended in about 0.87 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme7n1 : 0.87 293.04 18.31 73.26 0.00 164446.81 20583.16 139033.41
00:22:01.969 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme8n1 ended in about 0.86 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme8n1 : 0.86 340.34 21.27 74.85 0.00 143752.28 24175.50 128936.01
00:22:01.969 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme9n1 ended in about 0.88 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme9n1 : 0.88 284.51 17.78 72.54 0.00 166036.25 100973.99 128936.01
00:22:01.969 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:22:01.969 Job: Nvme10n1 ended in about 0.87 seconds with error
00:22:01.969 Verification LBA range: start 0x0 length 0x400
00:22:01.969 Nvme10n1 : 0.87 290.21 18.14 73.70 0.00 161173.24 11553.75 129712.73
00:22:01.969 ===================================================================================================================
00:22:01.969 Total : 3244.13 202.76 741.37 0.00 153401.66 10534.31 166995.44
00:22:01.969 [2024-07-10 15:45:41.240465] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:22:01.969 [2024-07-10 15:45:41.240550] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:22:01.969 [2024-07-10 15:45:41.240646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3ea30 (9): Bad file descriptor
00:22:01.969 [2024-07-10 15:45:41.240676] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe47f70 (9): Bad file descriptor
00:22:01.969 [2024-07-10 15:45:41.240696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda8a50 (9): Bad file descriptor
00:22:01.969 [2024-07-10 15:45:41.240723] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd32b0 (9): Bad file descriptor
00:22:01.969 [2024-07-10 15:45:41.240740] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:22:01.969 [2024-07-10 15:45:41.240754] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:22:01.969 [2024-07-10 15:45:41.240782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:22:01.969 [2024-07-10 15:45:41.240873] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:01.969 [2024-07-10 15:45:41.240899] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:01.969 [2024-07-10 15:45:41.240919] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:01.969 [2024-07-10 15:45:41.240938] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:01.969 [2024-07-10 15:45:41.240957] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:22:01.969 [2024-07-10 15:45:41.241382] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
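A quick sanity check on the bdevperf summary above: each job uses a 65536-byte (64 KiB) IO size, so MiB/s is simply IOPS / 16 (for Nvme1n1, 348.19 / 16 ≈ 21.76 MiB/s), and the Total row is the column sum across the ten devices: 348.19 + 341.33 + 337.95 + 336.68 + 340.82 + 331.07 + 293.04 + 340.34 + 284.51 + 290.21 ≈ 3244.13 IOPS, or about 202.76 MiB/s aggregate during the roughly 0.85 s the jobs ran before the shutdown aborted them.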
00:22:01.969 [2024-07-10 15:45:41.241713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.969 [2024-07-10 15:45:41.241867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.969 [2024-07-10 15:45:41.241894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1006490 with addr=10.0.0.2, port=4420 00:22:01.969 [2024-07-10 15:45:41.241914] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1006490 is same with the state(5) to be set 00:22:01.969 [2024-07-10 15:45:41.242038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.969 [2024-07-10 15:45:41.242174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.969 [2024-07-10 15:45:41.242200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe3ee60 with addr=10.0.0.2, port=4420 00:22:01.969 [2024-07-10 15:45:41.242216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe3ee60 is same with the state(5) to be set 00:22:01.969 [2024-07-10 15:45:41.242353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.969 [2024-07-10 15:45:41.242497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.969 [2024-07-10 15:45:41.242524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfd2e20 with addr=10.0.0.2, port=4420 00:22:01.969 [2024-07-10 15:45:41.242540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfd2e20 is same with the state(5) to be set 00:22:01.970 [2024-07-10 15:45:41.242555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.242568] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.242581] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:22:01.970 [2024-07-10 15:45:41.242601] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.242617] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.242630] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:22:01.970 [2024-07-10 15:45:41.242648] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.242662] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.242675] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:22:01.970 [2024-07-10 15:45:41.242692] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.242714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.242727] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:22:01.970 [2024-07-10 15:45:41.242768] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.970 [2024-07-10 15:45:41.242791] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.970 [2024-07-10 15:45:41.242815] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.970 [2024-07-10 15:45:41.242835] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.970 [2024-07-10 15:45:41.242853] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.970 [2024-07-10 15:45:41.242871] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:22:01.970 [2024-07-10 15:45:41.243491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:01.970 [2024-07-10 15:45:41.243519] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:22:01.970 [2024-07-10 15:45:41.243560] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.243576] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.243589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.243601] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.243638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1006490 (9): Bad file descriptor 00:22:01.970 [2024-07-10 15:45:41.243661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe3ee60 (9): Bad file descriptor 00:22:01.970 [2024-07-10 15:45:41.243679] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfd2e20 (9): Bad file descriptor 00:22:01.970 [2024-07-10 15:45:41.243763] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:22:01.970 [2024-07-10 15:45:41.243914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.970 [2024-07-10 15:45:41.244097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.970 [2024-07-10 15:45:41.244123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe45210 with addr=10.0.0.2, port=4420 00:22:01.970 [2024-07-10 15:45:41.244140] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe45210 is same with the state(5) to be set 00:22:01.970 [2024-07-10 15:45:41.244271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.970 [2024-07-10 15:45:41.244396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.970 [2024-07-10 15:45:41.244435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x100ba60 with addr=10.0.0.2, port=4420 00:22:01.970 [2024-07-10 15:45:41.244453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x100ba60 is same with the state(5) to be set 00:22:01.970 [2024-07-10 15:45:41.244467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.244480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.244494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:22:01.970 [2024-07-10 15:45:41.244512] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.244526] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.244539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:22:01.970 [2024-07-10 15:45:41.244555] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.244574] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.244587] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:22:01.970 [2024-07-10 15:45:41.244642] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.244662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.244674] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.244806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.970 [2024-07-10 15:45:41.244935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:01.970 [2024-07-10 15:45:41.244960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10068c0 with addr=10.0.0.2, port=4420 00:22:01.970 [2024-07-10 15:45:41.244975] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10068c0 is same with the state(5) to be set 00:22:01.970 [2024-07-10 15:45:41.244994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe45210 (9): Bad file descriptor 00:22:01.970 [2024-07-10 15:45:41.245013] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x100ba60 (9): Bad file descriptor 00:22:01.970 [2024-07-10 15:45:41.245059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10068c0 (9): Bad file descriptor 00:22:01.970 [2024-07-10 15:45:41.245081] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.245095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.245108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
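The repeated posix.c connect() failures with errno = 111 are ECONNREFUSED: the target side of this shutdown test is going away, so every reconnect attempt from bdevperf is refused and the per-controller reset/reinitialization paths fail as logged for cnode1 through cnode10. A minimal way to confirm there is simply no listener left, assuming the same 10.0.0.2:4420 target address used in this run and a netcat variant that supports -z:
nc -z -w 1 10.0.0.2 4420 && echo 'listener on 4420' || echo 'refused or timed out (consistent with errno 111)'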
00:22:01.970 [2024-07-10 15:45:41.245124] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.245138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.245152] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:22:01.970 [2024-07-10 15:45:41.245189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.245217] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:01.970 [2024-07-10 15:45:41.245230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:22:01.970 [2024-07-10 15:45:41.245242] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:22:01.970 [2024-07-10 15:45:41.245255] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:22:01.970 [2024-07-10 15:45:41.245292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:02.621 15:45:41 -- target/shutdown.sh@135 -- # nvmfpid= 00:22:02.621 15:45:41 -- target/shutdown.sh@138 -- # sleep 1 00:22:03.585 15:45:42 -- target/shutdown.sh@141 -- # kill -9 2179639 00:22:03.585 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2179639) - No such process 00:22:03.585 15:45:42 -- target/shutdown.sh@141 -- # true 00:22:03.585 15:45:42 -- target/shutdown.sh@143 -- # stoptarget 00:22:03.585 15:45:42 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:22:03.585 15:45:42 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:22:03.585 15:45:42 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:03.585 15:45:42 -- target/shutdown.sh@45 -- # nvmftestfini 00:22:03.585 15:45:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:03.585 15:45:42 -- nvmf/common.sh@116 -- # sync 00:22:03.585 15:45:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:03.585 15:45:42 -- nvmf/common.sh@119 -- # set +e 00:22:03.585 15:45:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:03.585 15:45:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:03.585 rmmod nvme_tcp 00:22:03.585 rmmod nvme_fabrics 00:22:03.585 rmmod nvme_keyring 00:22:03.585 15:45:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:03.585 15:45:42 -- nvmf/common.sh@123 -- # set -e 00:22:03.585 15:45:42 -- nvmf/common.sh@124 -- # return 0 00:22:03.585 15:45:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:22:03.585 15:45:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:03.585 15:45:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:03.585 15:45:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:03.585 15:45:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.585 15:45:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:03.585 15:45:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.585 15:45:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:03.585 15:45:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.486 15:45:44 -- nvmf/common.sh@278 
-- # ip -4 addr flush cvl_0_1 00:22:05.486 00:22:05.486 real 0m8.009s 00:22:05.486 user 0m20.594s 00:22:05.486 sys 0m1.613s 00:22:05.486 15:45:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.486 15:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:05.486 ************************************ 00:22:05.486 END TEST nvmf_shutdown_tc3 00:22:05.486 ************************************ 00:22:05.486 15:45:44 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:22:05.486 00:22:05.486 real 0m28.879s 00:22:05.486 user 1m23.112s 00:22:05.486 sys 0m6.524s 00:22:05.486 15:45:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:05.486 15:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:05.486 ************************************ 00:22:05.486 END TEST nvmf_shutdown 00:22:05.486 ************************************ 00:22:05.486 15:45:44 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:22:05.486 15:45:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:05.486 15:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:05.745 15:45:44 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:22:05.745 15:45:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:05.745 15:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:05.745 15:45:44 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:22:05.745 15:45:44 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:05.745 15:45:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:05.745 15:45:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:05.745 15:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:05.745 ************************************ 00:22:05.745 START TEST nvmf_multicontroller 00:22:05.745 ************************************ 00:22:05.745 15:45:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:22:05.745 * Looking for test storage... 
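The multicontroller host test that starts here can also be run on its own against a checked-out, built SPDK workspace; a minimal sketch, assuming root privileges and the same NIC and network preparation this job performs:
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sudo ./test/nvmf/host/multicontroller.sh --transport=tcp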
00:22:05.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:05.745 15:45:44 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:05.745 15:45:44 -- nvmf/common.sh@7 -- # uname -s 00:22:05.745 15:45:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:05.745 15:45:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:05.745 15:45:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:05.745 15:45:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:05.745 15:45:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:05.745 15:45:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:05.745 15:45:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:05.745 15:45:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:05.745 15:45:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:05.745 15:45:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:05.745 15:45:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.745 15:45:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:05.745 15:45:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:05.745 15:45:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:05.745 15:45:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:05.745 15:45:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:05.745 15:45:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:05.745 15:45:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:05.745 15:45:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:05.745 15:45:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.745 15:45:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.745 15:45:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.745 15:45:44 -- paths/export.sh@5 -- # export PATH 00:22:05.745 15:45:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:05.745 15:45:44 -- nvmf/common.sh@46 -- # : 0 00:22:05.745 15:45:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:05.745 15:45:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:05.745 15:45:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:05.745 15:45:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:05.745 15:45:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:05.745 15:45:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:05.745 15:45:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:05.745 15:45:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:05.745 15:45:44 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:05.745 15:45:44 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:05.745 15:45:44 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:22:05.745 15:45:44 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:22:05.745 15:45:44 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:05.745 15:45:44 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:22:05.745 15:45:44 -- host/multicontroller.sh@23 -- # nvmftestinit 00:22:05.745 15:45:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:05.745 15:45:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:05.745 15:45:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:05.745 15:45:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:05.745 15:45:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:05.745 15:45:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.745 15:45:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.745 15:45:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:05.745 15:45:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:05.745 15:45:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:05.745 15:45:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:05.745 15:45:44 -- common/autotest_common.sh@10 -- # set +x 00:22:07.647 15:45:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:07.647 15:45:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:07.647 15:45:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:07.647 15:45:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:07.647 
15:45:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:07.647 15:45:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:07.647 15:45:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:07.647 15:45:46 -- nvmf/common.sh@294 -- # net_devs=() 00:22:07.647 15:45:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:07.647 15:45:46 -- nvmf/common.sh@295 -- # e810=() 00:22:07.647 15:45:46 -- nvmf/common.sh@295 -- # local -ga e810 00:22:07.647 15:45:46 -- nvmf/common.sh@296 -- # x722=() 00:22:07.647 15:45:46 -- nvmf/common.sh@296 -- # local -ga x722 00:22:07.647 15:45:46 -- nvmf/common.sh@297 -- # mlx=() 00:22:07.647 15:45:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:07.647 15:45:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.647 15:45:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:07.647 15:45:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:07.647 15:45:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:07.647 15:45:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:07.647 15:45:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:07.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:07.647 15:45:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:07.647 15:45:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:07.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:07.647 15:45:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:07.647 15:45:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 
00:22:07.647 15:45:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.647 15:45:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:07.647 15:45:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.647 15:45:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:07.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:07.647 15:45:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.647 15:45:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:07.647 15:45:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.647 15:45:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:07.647 15:45:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.647 15:45:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:07.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:07.647 15:45:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.647 15:45:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:07.647 15:45:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:07.647 15:45:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:07.647 15:45:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:07.647 15:45:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.647 15:45:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.647 15:45:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.647 15:45:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:07.647 15:45:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.647 15:45:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.647 15:45:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:07.647 15:45:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.647 15:45:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.647 15:45:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:07.647 15:45:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:07.647 15:45:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.647 15:45:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.647 15:45:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.647 15:45:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.647 15:45:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:07.647 15:45:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.905 15:45:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.905 15:45:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.905 15:45:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:07.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:07.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:22:07.905 00:22:07.905 --- 10.0.0.2 ping statistics --- 00:22:07.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.905 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:07.905 15:45:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:22:07.905 00:22:07.906 --- 10.0.0.1 ping statistics --- 00:22:07.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.906 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:07.906 15:45:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.906 15:45:47 -- nvmf/common.sh@410 -- # return 0 00:22:07.906 15:45:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:07.906 15:45:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.906 15:45:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:07.906 15:45:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:07.906 15:45:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.906 15:45:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:07.906 15:45:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:07.906 15:45:47 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:22:07.906 15:45:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:07.906 15:45:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:07.906 15:45:47 -- common/autotest_common.sh@10 -- # set +x 00:22:07.906 15:45:47 -- nvmf/common.sh@469 -- # nvmfpid=2182066 00:22:07.906 15:45:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:07.906 15:45:47 -- nvmf/common.sh@470 -- # waitforlisten 2182066 00:22:07.906 15:45:47 -- common/autotest_common.sh@819 -- # '[' -z 2182066 ']' 00:22:07.906 15:45:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.906 15:45:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:07.906 15:45:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.906 15:45:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:07.906 15:45:47 -- common/autotest_common.sh@10 -- # set +x 00:22:07.906 [2024-07-10 15:45:47.126152] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:07.906 [2024-07-10 15:45:47.126225] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.906 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.906 [2024-07-10 15:45:47.191831] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:08.163 [2024-07-10 15:45:47.307134] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:08.163 [2024-07-10 15:45:47.307287] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:08.163 [2024-07-10 15:45:47.307307] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
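For reference, the target-side plumbing that nvmf_tcp_init logged a few lines above boils down to a network namespace holding one port of the E810 pair plus an iptables accept rule for the NVMe/TCP port; a condensed sketch using the same cvl_0_0/cvl_0_1 interface names detected in this run (run as root):
ip netns add cvl_0_0_ns_spdk                                  # target runs inside this namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # move the target-side port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic to port 4420
The two pings above are just the script verifying that 10.0.0.2 (namespace side) and 10.0.0.1 (initiator side) answer before the target application is started.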
00:22:08.163 [2024-07-10 15:45:47.307322] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:08.163 [2024-07-10 15:45:47.307417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:08.163 [2024-07-10 15:45:47.307569] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:08.163 [2024-07-10 15:45:47.307573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.096 15:45:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:09.096 15:45:48 -- common/autotest_common.sh@852 -- # return 0 00:22:09.096 15:45:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:09.096 15:45:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:09.096 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 15:45:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.096 15:45:48 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:09.096 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.096 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 [2024-07-10 15:45:48.154149] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.096 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.096 15:45:48 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:09.096 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.096 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 Malloc0 00:22:09.096 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.096 15:45:48 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:09.096 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.096 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.096 15:45:48 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:09.096 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.096 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.096 15:45:48 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:09.096 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.096 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.096 [2024-07-10 15:45:48.215266] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.097 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.097 15:45:48 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:09.097 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.097 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 [2024-07-10 15:45:48.223158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:09.097 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.097 15:45:48 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:09.097 15:45:48 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:22:09.097 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 Malloc1 00:22:09.097 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.097 15:45:48 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:22:09.097 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.097 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.097 15:45:48 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:22:09.097 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.097 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.097 15:45:48 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:09.097 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.097 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.097 15:45:48 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:22:09.097 15:45:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.097 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:09.097 15:45:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.097 15:45:48 -- host/multicontroller.sh@44 -- # bdevperf_pid=2182229 00:22:09.097 15:45:48 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:22:09.097 15:45:48 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:09.097 15:45:48 -- host/multicontroller.sh@47 -- # waitforlisten 2182229 /var/tmp/bdevperf.sock 00:22:09.097 15:45:48 -- common/autotest_common.sh@819 -- # '[' -z 2182229 ']' 00:22:09.097 15:45:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.097 15:45:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:09.097 15:45:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
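The rpc_cmd calls above drive two separate JSON-RPC servers: the nvmf target application on the default /var/tmp/spdk.sock, and the bdevperf instance just launched with -r /var/tmp/bdevperf.sock. In SPDK's test harness rpc_cmd is essentially a wrapper around scripts/rpc.py, so the same flow can be sketched directly (paths, sizes and NQNs taken from this run; the Malloc1/cnode2 half is analogous):
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# attach the bdevperf-side controller; the duplicate-attach and multipath
# (disable/failover) attempts that follow reuse this NVMe0 name, which is
# why they come back with JSON-RPC error -114
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000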
00:22:09.097 15:45:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:09.097 15:45:48 -- common/autotest_common.sh@10 -- # set +x 00:22:10.029 15:45:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:10.029 15:45:49 -- common/autotest_common.sh@852 -- # return 0 00:22:10.029 15:45:49 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:10.029 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.029 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 NVMe0n1 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.287 15:45:49 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.287 15:45:49 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.287 1 00:22:10.287 15:45:49 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:10.287 15:45:49 -- common/autotest_common.sh@640 -- # local es=0 00:22:10.287 15:45:49 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:10.287 15:45:49 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 request: 00:22:10.287 { 00:22:10.287 "name": "NVMe0", 00:22:10.287 "trtype": "tcp", 00:22:10.287 "traddr": "10.0.0.2", 00:22:10.287 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:22:10.287 "hostaddr": "10.0.0.2", 00:22:10.287 "hostsvcid": "60000", 00:22:10.287 "adrfam": "ipv4", 00:22:10.287 "trsvcid": "4420", 00:22:10.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.287 "method": "bdev_nvme_attach_controller", 00:22:10.287 "req_id": 1 00:22:10.287 } 00:22:10.287 Got JSON-RPC error response 00:22:10.287 response: 00:22:10.287 { 00:22:10.287 "code": -114, 00:22:10.287 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:10.287 } 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # es=1 00:22:10.287 15:45:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:10.287 15:45:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:10.287 15:45:49 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:10.287 15:45:49 -- common/autotest_common.sh@640 -- # local es=0 00:22:10.287 15:45:49 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:10.287 15:45:49 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 request: 00:22:10.287 { 00:22:10.287 "name": "NVMe0", 00:22:10.287 "trtype": "tcp", 00:22:10.287 "traddr": "10.0.0.2", 00:22:10.287 "hostaddr": "10.0.0.2", 00:22:10.287 "hostsvcid": "60000", 00:22:10.287 "adrfam": "ipv4", 00:22:10.287 "trsvcid": "4420", 00:22:10.287 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:10.287 "method": "bdev_nvme_attach_controller", 00:22:10.287 "req_id": 1 00:22:10.287 } 00:22:10.287 Got JSON-RPC error response 00:22:10.287 response: 00:22:10.287 { 00:22:10.287 "code": -114, 00:22:10.287 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:10.287 } 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # es=1 00:22:10.287 15:45:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:10.287 15:45:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:10.287 15:45:49 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@640 -- # local es=0 00:22:10.287 15:45:49 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 request: 00:22:10.287 { 00:22:10.287 "name": "NVMe0", 00:22:10.287 "trtype": "tcp", 00:22:10.287 "traddr": "10.0.0.2", 00:22:10.287 "hostaddr": 
"10.0.0.2", 00:22:10.287 "hostsvcid": "60000", 00:22:10.287 "adrfam": "ipv4", 00:22:10.287 "trsvcid": "4420", 00:22:10.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.287 "multipath": "disable", 00:22:10.287 "method": "bdev_nvme_attach_controller", 00:22:10.287 "req_id": 1 00:22:10.287 } 00:22:10.287 Got JSON-RPC error response 00:22:10.287 response: 00:22:10.287 { 00:22:10.287 "code": -114, 00:22:10.287 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:22:10.287 } 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # es=1 00:22:10.287 15:45:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:10.287 15:45:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:10.287 15:45:49 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:10.287 15:45:49 -- common/autotest_common.sh@640 -- # local es=0 00:22:10.287 15:45:49 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:10.287 15:45:49 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:22:10.287 15:45:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 request: 00:22:10.287 { 00:22:10.287 "name": "NVMe0", 00:22:10.287 "trtype": "tcp", 00:22:10.287 "traddr": "10.0.0.2", 00:22:10.287 "hostaddr": "10.0.0.2", 00:22:10.287 "hostsvcid": "60000", 00:22:10.287 "adrfam": "ipv4", 00:22:10.287 "trsvcid": "4420", 00:22:10.287 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:10.287 "multipath": "failover", 00:22:10.287 "method": "bdev_nvme_attach_controller", 00:22:10.287 "req_id": 1 00:22:10.287 } 00:22:10.287 Got JSON-RPC error response 00:22:10.287 response: 00:22:10.287 { 00:22:10.287 "code": -114, 00:22:10.287 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:22:10.287 } 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@643 -- # es=1 00:22:10.287 15:45:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:22:10.287 15:45:49 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:22:10.287 15:45:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:22:10.287 15:45:49 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:22:10.287 15:45:49 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.287 15:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.287 15:45:49 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:22:10.287 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.287 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.545 00:22:10.545 15:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.545 15:45:49 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:10.545 15:45:49 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:22:10.545 15:45:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.545 15:45:49 -- common/autotest_common.sh@10 -- # set +x 00:22:10.545 15:45:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:10.545 15:45:49 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:22:10.545 15:45:49 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.475 0 00:22:11.475 15:45:50 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:22:11.475 15:45:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.475 15:45:50 -- common/autotest_common.sh@10 -- # set +x 00:22:11.475 15:45:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.475 15:45:50 -- host/multicontroller.sh@100 -- # killprocess 2182229 00:22:11.475 15:45:50 -- common/autotest_common.sh@926 -- # '[' -z 2182229 ']' 00:22:11.475 15:45:50 -- common/autotest_common.sh@930 -- # kill -0 2182229 00:22:11.475 15:45:50 -- common/autotest_common.sh@931 -- # uname 00:22:11.475 15:45:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:11.475 15:45:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2182229 00:22:11.732 15:45:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:11.732 15:45:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:11.732 15:45:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2182229' 00:22:11.732 killing process with pid 2182229 00:22:11.732 15:45:50 -- common/autotest_common.sh@945 -- # kill 2182229 00:22:11.732 15:45:50 -- common/autotest_common.sh@950 -- # wait 2182229 00:22:11.990 15:45:51 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:11.990 15:45:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.990 15:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:11.990 15:45:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.990 15:45:51 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:11.990 15:45:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:11.990 15:45:51 -- common/autotest_common.sh@10 -- # set +x 00:22:11.990 15:45:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.990 15:45:51 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
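As a rough, hand-runnable sketch rather than an excerpt of the captured log: the multicontroller run above drives SPDK's bdev_nvme_attach_controller / bdev_nvme_detach_controller RPCs against bdevperf's RPC socket, and the harness's rpc_cmd wrapper forwards those arguments to scripts/rpc.py. Assuming an SPDK checkout and a bdevperf process already listening on /var/tmp/bdevperf.sock against the same target as in the trace, the key calls would look roughly like this (only flags that appear in the trace are used):

    # First path: accepted, exposes bdev NVMe0n1.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

    # Re-using the name NVMe0 with a different host NQN (-q), a different
    # subsystem NQN, or with -x disable / -x failover is rejected with
    # JSON-RPC error -114, matching the four error responses traced above.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 || true

    # A second path to the same subsystem on port 4421 is accepted (multipath)
    # and can be detached again; a second controller name (NVMe1) also attaches.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1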
00:22:11.990 15:45:51 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:11.990 15:45:51 -- common/autotest_common.sh@1597 -- # read -r file 00:22:11.990 15:45:51 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:22:11.990 15:45:51 -- common/autotest_common.sh@1596 -- # sort -u 00:22:11.990 15:45:51 -- common/autotest_common.sh@1598 -- # cat 00:22:11.990 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:11.990 [2024-07-10 15:45:48.317067] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:11.990 [2024-07-10 15:45:48.317167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2182229 ] 00:22:11.990 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.990 [2024-07-10 15:45:48.377251] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.990 [2024-07-10 15:45:48.484980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.990 [2024-07-10 15:45:49.693146] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 9d4d889d-901c-4a60-9036-dc5f24dbffbc already exists 00:22:11.990 [2024-07-10 15:45:49.693187] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:9d4d889d-901c-4a60-9036-dc5f24dbffbc alias for bdev NVMe1n1 00:22:11.990 [2024-07-10 15:45:49.693204] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:22:11.990 Running I/O for 1 seconds... 00:22:11.990 00:22:11.990 Latency(us) 00:22:11.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.990 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:22:11.990 NVMe0n1 : 1.01 19769.78 77.23 0.00 0.00 6457.96 3810.80 14854.83 00:22:11.990 =================================================================================================================== 00:22:11.990 Total : 19769.78 77.23 0.00 0.00 6457.96 3810.80 14854.83 00:22:11.990 Received shutdown signal, test time was about 1.000000 seconds 00:22:11.990 00:22:11.990 Latency(us) 00:22:11.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:11.990 =================================================================================================================== 00:22:11.990 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:11.990 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:22:11.990 15:45:51 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:11.990 15:45:51 -- common/autotest_common.sh@1597 -- # read -r file 00:22:11.990 15:45:51 -- host/multicontroller.sh@108 -- # nvmftestfini 00:22:11.990 15:45:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:11.990 15:45:51 -- nvmf/common.sh@116 -- # sync 00:22:11.990 15:45:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:11.990 15:45:51 -- nvmf/common.sh@119 -- # set +e 00:22:11.990 15:45:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:11.990 15:45:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:11.990 rmmod nvme_tcp 00:22:11.990 rmmod nvme_fabrics 00:22:11.990 rmmod nvme_keyring 00:22:11.990 15:45:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:11.990 15:45:51 -- nvmf/common.sh@123 -- # set 
-e 00:22:11.990 15:45:51 -- nvmf/common.sh@124 -- # return 0 00:22:11.990 15:45:51 -- nvmf/common.sh@477 -- # '[' -n 2182066 ']' 00:22:11.990 15:45:51 -- nvmf/common.sh@478 -- # killprocess 2182066 00:22:11.990 15:45:51 -- common/autotest_common.sh@926 -- # '[' -z 2182066 ']' 00:22:11.990 15:45:51 -- common/autotest_common.sh@930 -- # kill -0 2182066 00:22:11.990 15:45:51 -- common/autotest_common.sh@931 -- # uname 00:22:11.990 15:45:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:11.990 15:45:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2182066 00:22:11.990 15:45:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:11.990 15:45:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:11.990 15:45:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2182066' 00:22:11.990 killing process with pid 2182066 00:22:11.990 15:45:51 -- common/autotest_common.sh@945 -- # kill 2182066 00:22:11.990 15:45:51 -- common/autotest_common.sh@950 -- # wait 2182066 00:22:12.248 15:45:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:12.248 15:45:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:12.248 15:45:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:12.248 15:45:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:12.248 15:45:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:12.248 15:45:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:12.248 15:45:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:12.248 15:45:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.774 15:45:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:14.774 00:22:14.774 real 0m8.720s 00:22:14.774 user 0m16.653s 00:22:14.774 sys 0m2.299s 00:22:14.774 15:45:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:14.774 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:14.774 ************************************ 00:22:14.774 END TEST nvmf_multicontroller 00:22:14.774 ************************************ 00:22:14.774 15:45:53 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:14.774 15:45:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:14.774 15:45:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:14.774 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:14.774 ************************************ 00:22:14.774 START TEST nvmf_aer 00:22:14.774 ************************************ 00:22:14.774 15:45:53 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:14.774 * Looking for test storage... 
00:22:14.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:14.774 15:45:53 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:14.774 15:45:53 -- nvmf/common.sh@7 -- # uname -s 00:22:14.775 15:45:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.775 15:45:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.775 15:45:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.775 15:45:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.775 15:45:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.775 15:45:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.775 15:45:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.775 15:45:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.775 15:45:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.775 15:45:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.775 15:45:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.775 15:45:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.775 15:45:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.775 15:45:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.775 15:45:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:14.775 15:45:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:14.775 15:45:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.775 15:45:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.775 15:45:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.775 15:45:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.775 15:45:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.775 15:45:53 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.775 15:45:53 -- paths/export.sh@5 -- # export PATH 00:22:14.775 15:45:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.775 15:45:53 -- nvmf/common.sh@46 -- # : 0 00:22:14.775 15:45:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:14.775 15:45:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:14.775 15:45:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:14.775 15:45:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.775 15:45:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.775 15:45:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:14.775 15:45:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:14.775 15:45:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:14.775 15:45:53 -- host/aer.sh@11 -- # nvmftestinit 00:22:14.775 15:45:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:14.775 15:45:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.775 15:45:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:14.775 15:45:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:14.775 15:45:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:14.775 15:45:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.775 15:45:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.775 15:45:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.775 15:45:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:14.775 15:45:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:14.775 15:45:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:14.775 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.671 15:45:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:16.671 15:45:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:16.671 15:45:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:16.671 15:45:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:16.671 15:45:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:16.671 15:45:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:16.671 15:45:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:16.671 15:45:55 -- nvmf/common.sh@294 -- # net_devs=() 00:22:16.671 15:45:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:16.671 15:45:55 -- nvmf/common.sh@295 -- # e810=() 00:22:16.671 15:45:55 -- nvmf/common.sh@295 -- # local -ga e810 00:22:16.671 15:45:55 -- nvmf/common.sh@296 -- # x722=() 00:22:16.671 
15:45:55 -- nvmf/common.sh@296 -- # local -ga x722 00:22:16.671 15:45:55 -- nvmf/common.sh@297 -- # mlx=() 00:22:16.671 15:45:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:16.671 15:45:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.671 15:45:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:16.671 15:45:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:16.671 15:45:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:16.671 15:45:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:16.671 15:45:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:16.671 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:16.671 15:45:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:16.671 15:45:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:16.671 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:16.671 15:45:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:16.671 15:45:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:16.671 15:45:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.671 15:45:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:16.671 15:45:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.671 15:45:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:16.671 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:16.671 15:45:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.671 15:45:55 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:16.671 15:45:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.671 15:45:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:16.671 15:45:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.671 15:45:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:16.671 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:16.671 15:45:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.671 15:45:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:16.671 15:45:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:16.671 15:45:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:16.671 15:45:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.671 15:45:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.671 15:45:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.671 15:45:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:16.671 15:45:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.671 15:45:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.671 15:45:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:16.671 15:45:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.671 15:45:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.671 15:45:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:16.671 15:45:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:16.671 15:45:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.671 15:45:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.671 15:45:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.671 15:45:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.671 15:45:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:16.671 15:45:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.671 15:45:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.671 15:45:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.671 15:45:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:16.671 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.671 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:22:16.671 00:22:16.671 --- 10.0.0.2 ping statistics --- 00:22:16.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.671 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:22:16.671 15:45:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.671 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.671 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:22:16.671 00:22:16.671 --- 10.0.0.1 ping statistics --- 00:22:16.671 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.671 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:22:16.671 15:45:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.671 15:45:55 -- nvmf/common.sh@410 -- # return 0 00:22:16.671 15:45:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:16.671 15:45:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.671 15:45:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:16.671 15:45:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.671 15:45:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:16.671 15:45:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:16.671 15:45:55 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:16.671 15:45:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:16.671 15:45:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:16.671 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:16.671 15:45:55 -- nvmf/common.sh@469 -- # nvmfpid=2184584 00:22:16.671 15:45:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:16.671 15:45:55 -- nvmf/common.sh@470 -- # waitforlisten 2184584 00:22:16.671 15:45:55 -- common/autotest_common.sh@819 -- # '[' -z 2184584 ']' 00:22:16.671 15:45:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.671 15:45:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:16.671 15:45:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.672 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.672 15:45:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:16.672 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:22:16.672 [2024-07-10 15:45:55.794284] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:16.672 [2024-07-10 15:45:55.794366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.672 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.672 [2024-07-10 15:45:55.861740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:16.672 [2024-07-10 15:45:55.976907] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:16.672 [2024-07-10 15:45:55.977073] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.672 [2024-07-10 15:45:55.977092] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.672 [2024-07-10 15:45:55.977105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:16.672 [2024-07-10 15:45:55.977163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.672 [2024-07-10 15:45:55.977188] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:16.672 [2024-07-10 15:45:55.977306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:16.672 [2024-07-10 15:45:55.977308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.602 15:45:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:17.602 15:45:56 -- common/autotest_common.sh@852 -- # return 0 00:22:17.602 15:45:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:17.602 15:45:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:17.602 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.602 15:45:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.602 15:45:56 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:17.602 15:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.602 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.602 [2024-07-10 15:45:56.743829] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.602 15:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.602 15:45:56 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:17.602 15:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.602 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.602 Malloc0 00:22:17.602 15:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.602 15:45:56 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:17.602 15:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.602 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.602 15:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.602 15:45:56 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:17.602 15:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.602 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.602 15:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.602 15:45:56 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.602 15:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.602 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.602 [2024-07-10 15:45:56.794937] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.602 15:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.602 15:45:56 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:17.602 15:45:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.602 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:22:17.602 [2024-07-10 15:45:56.802670] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:17.602 [ 00:22:17.602 { 00:22:17.602 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:17.602 "subtype": "Discovery", 00:22:17.602 "listen_addresses": [], 00:22:17.602 "allow_any_host": true, 00:22:17.602 "hosts": [] 00:22:17.602 }, 00:22:17.602 { 00:22:17.602 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:17.602 "subtype": "NVMe", 00:22:17.602 "listen_addresses": [ 00:22:17.602 { 00:22:17.602 "transport": "TCP", 00:22:17.602 "trtype": "TCP", 00:22:17.602 "adrfam": "IPv4", 00:22:17.602 "traddr": "10.0.0.2", 00:22:17.602 "trsvcid": "4420" 00:22:17.602 } 00:22:17.602 ], 00:22:17.602 "allow_any_host": true, 00:22:17.602 "hosts": [], 00:22:17.602 "serial_number": "SPDK00000000000001", 00:22:17.602 "model_number": "SPDK bdev Controller", 00:22:17.602 "max_namespaces": 2, 00:22:17.602 "min_cntlid": 1, 00:22:17.602 "max_cntlid": 65519, 00:22:17.602 "namespaces": [ 00:22:17.602 { 00:22:17.602 "nsid": 1, 00:22:17.602 "bdev_name": "Malloc0", 00:22:17.602 "name": "Malloc0", 00:22:17.602 "nguid": "F424622D170E4A0883ABAD02C9F133A7", 00:22:17.603 "uuid": "f424622d-170e-4a08-83ab-ad02c9f133a7" 00:22:17.603 } 00:22:17.603 ] 00:22:17.603 } 00:22:17.603 ] 00:22:17.603 15:45:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.603 15:45:56 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:17.603 15:45:56 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:17.603 15:45:56 -- host/aer.sh@33 -- # aerpid=2184743 00:22:17.603 15:45:56 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:17.603 15:45:56 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:17.603 15:45:56 -- common/autotest_common.sh@1244 -- # local i=0 00:22:17.603 15:45:56 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:17.603 15:45:56 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:22:17.603 15:45:56 -- common/autotest_common.sh@1247 -- # i=1 00:22:17.603 15:45:56 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:17.603 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.603 15:45:56 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:17.603 15:45:56 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:22:17.603 15:45:56 -- common/autotest_common.sh@1247 -- # i=2 00:22:17.603 15:45:56 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:22:17.859 15:45:57 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:17.859 15:45:57 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:17.859 15:45:57 -- common/autotest_common.sh@1255 -- # return 0 00:22:17.859 15:45:57 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:17.859 15:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.859 15:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:17.859 Malloc1 00:22:17.859 15:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.859 15:45:57 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:17.859 15:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.859 15:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:17.859 15:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.859 15:45:57 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:17.859 15:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.859 15:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:17.859 Asynchronous Event Request test 00:22:17.859 Attaching to 10.0.0.2 00:22:17.859 Attached to 10.0.0.2 00:22:17.859 Registering asynchronous event callbacks... 
00:22:17.859 Starting namespace attribute notice tests for all controllers... 00:22:17.859 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:17.859 aer_cb - Changed Namespace 00:22:17.859 Cleaning up... 00:22:17.859 [ 00:22:17.859 { 00:22:17.859 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:17.859 "subtype": "Discovery", 00:22:17.859 "listen_addresses": [], 00:22:17.859 "allow_any_host": true, 00:22:17.859 "hosts": [] 00:22:17.859 }, 00:22:17.859 { 00:22:17.859 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.859 "subtype": "NVMe", 00:22:17.859 "listen_addresses": [ 00:22:17.859 { 00:22:17.859 "transport": "TCP", 00:22:17.859 "trtype": "TCP", 00:22:17.859 "adrfam": "IPv4", 00:22:17.859 "traddr": "10.0.0.2", 00:22:17.859 "trsvcid": "4420" 00:22:17.859 } 00:22:17.859 ], 00:22:17.859 "allow_any_host": true, 00:22:17.859 "hosts": [], 00:22:17.859 "serial_number": "SPDK00000000000001", 00:22:17.859 "model_number": "SPDK bdev Controller", 00:22:17.859 "max_namespaces": 2, 00:22:17.859 "min_cntlid": 1, 00:22:17.859 "max_cntlid": 65519, 00:22:17.859 "namespaces": [ 00:22:17.859 { 00:22:17.859 "nsid": 1, 00:22:17.859 "bdev_name": "Malloc0", 00:22:17.859 "name": "Malloc0", 00:22:17.859 "nguid": "F424622D170E4A0883ABAD02C9F133A7", 00:22:17.859 "uuid": "f424622d-170e-4a08-83ab-ad02c9f133a7" 00:22:17.859 }, 00:22:17.859 { 00:22:17.859 "nsid": 2, 00:22:17.859 "bdev_name": "Malloc1", 00:22:17.859 "name": "Malloc1", 00:22:17.859 "nguid": "518E3D7E345E46FBA557AEDB4C48A81C", 00:22:17.859 "uuid": "518e3d7e-345e-46fb-a557-aedb4c48a81c" 00:22:17.859 } 00:22:17.859 ] 00:22:17.859 } 00:22:17.859 ] 00:22:17.859 15:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.859 15:45:57 -- host/aer.sh@43 -- # wait 2184743 00:22:17.859 15:45:57 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:17.859 15:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.859 15:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:17.859 15:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.859 15:45:57 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:17.859 15:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.859 15:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:17.859 15:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.859 15:45:57 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:17.859 15:45:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:17.859 15:45:57 -- common/autotest_common.sh@10 -- # set +x 00:22:17.859 15:45:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:17.859 15:45:57 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:17.859 15:45:57 -- host/aer.sh@51 -- # nvmftestfini 00:22:17.860 15:45:57 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:17.860 15:45:57 -- nvmf/common.sh@116 -- # sync 00:22:17.860 15:45:57 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:17.860 15:45:57 -- nvmf/common.sh@119 -- # set +e 00:22:17.860 15:45:57 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:17.860 15:45:57 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:17.860 rmmod nvme_tcp 00:22:17.860 rmmod nvme_fabrics 00:22:17.860 rmmod nvme_keyring 00:22:17.860 15:45:57 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:17.860 15:45:57 -- nvmf/common.sh@123 -- # set -e 00:22:17.860 15:45:57 -- nvmf/common.sh@124 -- # return 0 00:22:17.860 15:45:57 -- nvmf/common.sh@477 -- # '[' -n 2184584 ']' 00:22:17.860 15:45:57 
-- nvmf/common.sh@478 -- # killprocess 2184584 00:22:17.860 15:45:57 -- common/autotest_common.sh@926 -- # '[' -z 2184584 ']' 00:22:17.860 15:45:57 -- common/autotest_common.sh@930 -- # kill -0 2184584 00:22:17.860 15:45:57 -- common/autotest_common.sh@931 -- # uname 00:22:17.860 15:45:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:17.860 15:45:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2184584 00:22:18.117 15:45:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:18.117 15:45:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:18.117 15:45:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2184584' 00:22:18.117 killing process with pid 2184584 00:22:18.117 15:45:57 -- common/autotest_common.sh@945 -- # kill 2184584 00:22:18.117 [2024-07-10 15:45:57.237206] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:18.117 15:45:57 -- common/autotest_common.sh@950 -- # wait 2184584 00:22:18.375 15:45:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:18.375 15:45:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:18.375 15:45:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:18.375 15:45:57 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.375 15:45:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:18.375 15:45:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.375 15:45:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.375 15:45:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.275 15:45:59 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:20.275 00:22:20.275 real 0m5.919s 00:22:20.275 user 0m6.776s 00:22:20.275 sys 0m1.842s 00:22:20.275 15:45:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.275 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:20.275 ************************************ 00:22:20.275 END TEST nvmf_aer 00:22:20.275 ************************************ 00:22:20.275 15:45:59 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:20.275 15:45:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:20.275 15:45:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:20.275 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:20.275 ************************************ 00:22:20.275 START TEST nvmf_async_init 00:22:20.275 ************************************ 00:22:20.275 15:45:59 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:20.275 * Looking for test storage... 
00:22:20.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.275 15:45:59 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.275 15:45:59 -- nvmf/common.sh@7 -- # uname -s 00:22:20.275 15:45:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.275 15:45:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.275 15:45:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.275 15:45:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.275 15:45:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.275 15:45:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.275 15:45:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.275 15:45:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.275 15:45:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.275 15:45:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.275 15:45:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.275 15:45:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.275 15:45:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.275 15:45:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.275 15:45:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.275 15:45:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.275 15:45:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.275 15:45:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.275 15:45:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.275 15:45:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.275 15:45:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.275 15:45:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.275 15:45:59 -- paths/export.sh@5 -- # export PATH 00:22:20.276 15:45:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.276 15:45:59 -- nvmf/common.sh@46 -- # : 0 00:22:20.276 15:45:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:20.276 15:45:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:20.276 15:45:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:20.276 15:45:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.276 15:45:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.276 15:45:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:20.276 15:45:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:20.276 15:45:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:20.276 15:45:59 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:20.276 15:45:59 -- host/async_init.sh@14 -- # null_block_size=512 00:22:20.276 15:45:59 -- host/async_init.sh@15 -- # null_bdev=null0 00:22:20.276 15:45:59 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:20.276 15:45:59 -- host/async_init.sh@20 -- # uuidgen 00:22:20.276 15:45:59 -- host/async_init.sh@20 -- # tr -d - 00:22:20.276 15:45:59 -- host/async_init.sh@20 -- # nguid=e0cd781334154789a1bd5d0a584ac268 00:22:20.276 15:45:59 -- host/async_init.sh@22 -- # nvmftestinit 00:22:20.276 15:45:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:20.276 15:45:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.276 15:45:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:20.276 15:45:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:20.276 15:45:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:20.276 15:45:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.276 15:45:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.276 15:45:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.533 15:45:59 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:20.533 15:45:59 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:20.533 15:45:59 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:20.533 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:22:22.435 15:46:01 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:22.435 15:46:01 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:22.435 15:46:01 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:22.435 15:46:01 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:22.435 15:46:01 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:22.435 15:46:01 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:22.435 15:46:01 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:22.435 15:46:01 -- nvmf/common.sh@294 -- # net_devs=() 00:22:22.435 15:46:01 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:22.435 15:46:01 -- nvmf/common.sh@295 -- # e810=() 00:22:22.435 15:46:01 -- nvmf/common.sh@295 -- # local -ga e810 00:22:22.435 15:46:01 -- nvmf/common.sh@296 -- # x722=() 00:22:22.435 15:46:01 -- nvmf/common.sh@296 -- # local -ga x722 00:22:22.435 15:46:01 -- nvmf/common.sh@297 -- # mlx=() 00:22:22.435 15:46:01 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:22.435 15:46:01 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:22.435 15:46:01 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:22.435 15:46:01 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:22.435 15:46:01 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:22.435 15:46:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:22.435 15:46:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:22.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:22.435 15:46:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:22.435 15:46:01 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:22.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:22.435 15:46:01 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:22.435 15:46:01 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:22.435 
15:46:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.435 15:46:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:22.435 15:46:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.435 15:46:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:22.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:22.435 15:46:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.435 15:46:01 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:22.435 15:46:01 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:22.435 15:46:01 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:22.435 15:46:01 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:22.435 15:46:01 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:22.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:22.435 15:46:01 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:22.435 15:46:01 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:22.435 15:46:01 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:22.435 15:46:01 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:22.435 15:46:01 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:22.435 15:46:01 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:22.435 15:46:01 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:22.435 15:46:01 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:22.435 15:46:01 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:22.435 15:46:01 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:22.435 15:46:01 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:22.435 15:46:01 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:22.435 15:46:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:22.435 15:46:01 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:22.435 15:46:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:22.435 15:46:01 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:22.435 15:46:01 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:22.435 15:46:01 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:22.435 15:46:01 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:22.435 15:46:01 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:22.435 15:46:01 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:22.435 15:46:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:22.435 15:46:01 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:22.435 15:46:01 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:22.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:22.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:22:22.435 00:22:22.435 --- 10.0.0.2 ping statistics --- 00:22:22.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.435 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:22.435 15:46:01 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:22.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:22.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:22:22.435 00:22:22.435 --- 10.0.0.1 ping statistics --- 00:22:22.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:22.435 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:22.435 15:46:01 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:22.435 15:46:01 -- nvmf/common.sh@410 -- # return 0 00:22:22.435 15:46:01 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:22.435 15:46:01 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:22.435 15:46:01 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:22.435 15:46:01 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:22.435 15:46:01 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:22.435 15:46:01 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:22.435 15:46:01 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:22.435 15:46:01 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:22.435 15:46:01 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:22.435 15:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:22.435 15:46:01 -- nvmf/common.sh@469 -- # nvmfpid=2186692 00:22:22.435 15:46:01 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:22.435 15:46:01 -- nvmf/common.sh@470 -- # waitforlisten 2186692 00:22:22.435 15:46:01 -- common/autotest_common.sh@819 -- # '[' -z 2186692 ']' 00:22:22.435 15:46:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.435 15:46:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:22.435 15:46:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:22.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.435 15:46:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:22.435 15:46:01 -- common/autotest_common.sh@10 -- # set +x 00:22:22.435 [2024-07-10 15:46:01.782285] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:22.435 [2024-07-10 15:46:01.782371] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.693 EAL: No free 2048 kB hugepages reported on node 1 00:22:22.693 [2024-07-10 15:46:01.849922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.693 [2024-07-10 15:46:01.963832] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:22.693 [2024-07-10 15:46:01.963981] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.693 [2024-07-10 15:46:01.963998] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.693 [2024-07-10 15:46:01.964010] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:22.693 [2024-07-10 15:46:01.964052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.627 15:46:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:23.627 15:46:02 -- common/autotest_common.sh@852 -- # return 0 00:22:23.627 15:46:02 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:23.627 15:46:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.627 15:46:02 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.627 15:46:02 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:23.627 15:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.627 [2024-07-10 15:46:02.795803] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.627 15:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.627 15:46:02 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:23.627 15:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.627 null0 00:22:23.627 15:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.627 15:46:02 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:23.627 15:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.627 15:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.627 15:46:02 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:23.627 15:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.627 15:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.627 15:46:02 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e0cd781334154789a1bd5d0a584ac268 00:22:23.627 15:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.627 15:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.627 15:46:02 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:23.627 15:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.627 [2024-07-10 15:46:02.836052] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.627 15:46:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.627 15:46:02 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:23.627 15:46:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.627 15:46:02 -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 nvme0n1 00:22:23.885 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.885 15:46:03 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:23.885 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.885 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 [ 00:22:23.885 { 00:22:23.885 "name": "nvme0n1", 00:22:23.885 "aliases": [ 00:22:23.885 
"e0cd7813-3415-4789-a1bd-5d0a584ac268" 00:22:23.885 ], 00:22:23.885 "product_name": "NVMe disk", 00:22:23.885 "block_size": 512, 00:22:23.885 "num_blocks": 2097152, 00:22:23.885 "uuid": "e0cd7813-3415-4789-a1bd-5d0a584ac268", 00:22:23.885 "assigned_rate_limits": { 00:22:23.885 "rw_ios_per_sec": 0, 00:22:23.885 "rw_mbytes_per_sec": 0, 00:22:23.885 "r_mbytes_per_sec": 0, 00:22:23.885 "w_mbytes_per_sec": 0 00:22:23.885 }, 00:22:23.885 "claimed": false, 00:22:23.885 "zoned": false, 00:22:23.885 "supported_io_types": { 00:22:23.885 "read": true, 00:22:23.885 "write": true, 00:22:23.885 "unmap": false, 00:22:23.885 "write_zeroes": true, 00:22:23.885 "flush": true, 00:22:23.885 "reset": true, 00:22:23.885 "compare": true, 00:22:23.885 "compare_and_write": true, 00:22:23.885 "abort": true, 00:22:23.885 "nvme_admin": true, 00:22:23.885 "nvme_io": true 00:22:23.885 }, 00:22:23.885 "driver_specific": { 00:22:23.885 "nvme": [ 00:22:23.885 { 00:22:23.885 "trid": { 00:22:23.885 "trtype": "TCP", 00:22:23.885 "adrfam": "IPv4", 00:22:23.885 "traddr": "10.0.0.2", 00:22:23.885 "trsvcid": "4420", 00:22:23.885 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:23.885 }, 00:22:23.885 "ctrlr_data": { 00:22:23.885 "cntlid": 1, 00:22:23.885 "vendor_id": "0x8086", 00:22:23.885 "model_number": "SPDK bdev Controller", 00:22:23.885 "serial_number": "00000000000000000000", 00:22:23.885 "firmware_revision": "24.01.1", 00:22:23.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:23.885 "oacs": { 00:22:23.885 "security": 0, 00:22:23.885 "format": 0, 00:22:23.885 "firmware": 0, 00:22:23.885 "ns_manage": 0 00:22:23.885 }, 00:22:23.885 "multi_ctrlr": true, 00:22:23.885 "ana_reporting": false 00:22:23.885 }, 00:22:23.885 "vs": { 00:22:23.885 "nvme_version": "1.3" 00:22:23.885 }, 00:22:23.885 "ns_data": { 00:22:23.885 "id": 1, 00:22:23.885 "can_share": true 00:22:23.885 } 00:22:23.885 } 00:22:23.885 ], 00:22:23.885 "mp_policy": "active_passive" 00:22:23.885 } 00:22:23.885 } 00:22:23.885 ] 00:22:23.885 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.885 15:46:03 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:23.885 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.885 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 [2024-07-10 15:46:03.084622] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:23.885 [2024-07-10 15:46:03.084718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x845a80 (9): Bad file descriptor 00:22:23.885 [2024-07-10 15:46:03.216576] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:23.885 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.885 15:46:03 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:23.885 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.885 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 [ 00:22:23.885 { 00:22:23.885 "name": "nvme0n1", 00:22:23.885 "aliases": [ 00:22:23.885 "e0cd7813-3415-4789-a1bd-5d0a584ac268" 00:22:23.885 ], 00:22:23.885 "product_name": "NVMe disk", 00:22:23.885 "block_size": 512, 00:22:23.885 "num_blocks": 2097152, 00:22:23.885 "uuid": "e0cd7813-3415-4789-a1bd-5d0a584ac268", 00:22:23.885 "assigned_rate_limits": { 00:22:23.885 "rw_ios_per_sec": 0, 00:22:23.885 "rw_mbytes_per_sec": 0, 00:22:23.885 "r_mbytes_per_sec": 0, 00:22:23.885 "w_mbytes_per_sec": 0 00:22:23.885 }, 00:22:23.885 "claimed": false, 00:22:23.885 "zoned": false, 00:22:23.885 "supported_io_types": { 00:22:23.885 "read": true, 00:22:23.885 "write": true, 00:22:23.885 "unmap": false, 00:22:23.885 "write_zeroes": true, 00:22:23.885 "flush": true, 00:22:23.885 "reset": true, 00:22:23.885 "compare": true, 00:22:23.885 "compare_and_write": true, 00:22:23.885 "abort": true, 00:22:23.885 "nvme_admin": true, 00:22:23.885 "nvme_io": true 00:22:23.885 }, 00:22:23.885 "driver_specific": { 00:22:23.885 "nvme": [ 00:22:23.885 { 00:22:23.885 "trid": { 00:22:23.885 "trtype": "TCP", 00:22:23.885 "adrfam": "IPv4", 00:22:23.885 "traddr": "10.0.0.2", 00:22:23.885 "trsvcid": "4420", 00:22:23.885 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:23.885 }, 00:22:23.885 "ctrlr_data": { 00:22:23.885 "cntlid": 2, 00:22:23.885 "vendor_id": "0x8086", 00:22:23.885 "model_number": "SPDK bdev Controller", 00:22:23.885 "serial_number": "00000000000000000000", 00:22:23.885 "firmware_revision": "24.01.1", 00:22:23.885 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:23.885 "oacs": { 00:22:23.885 "security": 0, 00:22:23.885 "format": 0, 00:22:23.885 "firmware": 0, 00:22:23.885 "ns_manage": 0 00:22:23.885 }, 00:22:23.885 "multi_ctrlr": true, 00:22:23.885 "ana_reporting": false 00:22:23.885 }, 00:22:23.885 "vs": { 00:22:23.885 "nvme_version": "1.3" 00:22:23.885 }, 00:22:23.885 "ns_data": { 00:22:23.885 "id": 1, 00:22:23.885 "can_share": true 00:22:23.885 } 00:22:23.885 } 00:22:23.885 ], 00:22:23.885 "mp_policy": "active_passive" 00:22:23.885 } 00:22:23.885 } 00:22:23.885 ] 00:22:23.885 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.885 15:46:03 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:23.885 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.885 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:23.885 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:23.885 15:46:03 -- host/async_init.sh@53 -- # mktemp 00:22:23.885 15:46:03 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.wnOtzlhK5m 00:22:23.885 15:46:03 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:23.885 15:46:03 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.wnOtzlhK5m 00:22:23.885 15:46:03 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:23.885 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:23.885 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.143 15:46:03 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:24.143 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.143 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 [2024-07-10 15:46:03.265295] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:24.143 [2024-07-10 15:46:03.265504] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:24.143 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.143 15:46:03 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wnOtzlhK5m 00:22:24.143 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.143 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.143 15:46:03 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.wnOtzlhK5m 00:22:24.143 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.143 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 [2024-07-10 15:46:03.281321] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:24.143 nvme0n1 00:22:24.143 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.143 15:46:03 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:24.143 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.143 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 [ 00:22:24.143 { 00:22:24.143 "name": "nvme0n1", 00:22:24.143 "aliases": [ 00:22:24.143 "e0cd7813-3415-4789-a1bd-5d0a584ac268" 00:22:24.143 ], 00:22:24.143 "product_name": "NVMe disk", 00:22:24.143 "block_size": 512, 00:22:24.143 "num_blocks": 2097152, 00:22:24.143 "uuid": "e0cd7813-3415-4789-a1bd-5d0a584ac268", 00:22:24.143 "assigned_rate_limits": { 00:22:24.143 "rw_ios_per_sec": 0, 00:22:24.143 "rw_mbytes_per_sec": 0, 00:22:24.143 "r_mbytes_per_sec": 0, 00:22:24.143 "w_mbytes_per_sec": 0 00:22:24.143 }, 00:22:24.143 "claimed": false, 00:22:24.143 "zoned": false, 00:22:24.143 "supported_io_types": { 00:22:24.143 "read": true, 00:22:24.143 "write": true, 00:22:24.143 "unmap": false, 00:22:24.143 "write_zeroes": true, 00:22:24.143 "flush": true, 00:22:24.143 "reset": true, 00:22:24.143 "compare": true, 00:22:24.143 "compare_and_write": true, 00:22:24.143 "abort": true, 00:22:24.143 "nvme_admin": true, 00:22:24.143 "nvme_io": true 00:22:24.143 }, 00:22:24.143 "driver_specific": { 00:22:24.143 "nvme": [ 00:22:24.143 { 00:22:24.143 "trid": { 00:22:24.143 "trtype": "TCP", 00:22:24.143 "adrfam": "IPv4", 00:22:24.143 "traddr": "10.0.0.2", 00:22:24.143 "trsvcid": "4421", 00:22:24.143 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:24.143 }, 00:22:24.143 "ctrlr_data": { 00:22:24.143 "cntlid": 3, 00:22:24.143 "vendor_id": "0x8086", 00:22:24.143 "model_number": "SPDK bdev Controller", 00:22:24.143 "serial_number": "00000000000000000000", 00:22:24.143 "firmware_revision": "24.01.1", 00:22:24.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:24.143 "oacs": { 00:22:24.143 "security": 0, 00:22:24.143 "format": 0, 00:22:24.143 "firmware": 0, 00:22:24.143 "ns_manage": 0 00:22:24.143 }, 00:22:24.143 "multi_ctrlr": true, 00:22:24.143 "ana_reporting": false 00:22:24.143 }, 00:22:24.143 "vs": 
{ 00:22:24.143 "nvme_version": "1.3" 00:22:24.143 }, 00:22:24.143 "ns_data": { 00:22:24.143 "id": 1, 00:22:24.143 "can_share": true 00:22:24.143 } 00:22:24.143 } 00:22:24.143 ], 00:22:24.143 "mp_policy": "active_passive" 00:22:24.143 } 00:22:24.143 } 00:22:24.143 ] 00:22:24.143 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.143 15:46:03 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:24.143 15:46:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:24.143 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:22:24.143 15:46:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:24.143 15:46:03 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.wnOtzlhK5m 00:22:24.143 15:46:03 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:24.143 15:46:03 -- host/async_init.sh@78 -- # nvmftestfini 00:22:24.143 15:46:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:24.143 15:46:03 -- nvmf/common.sh@116 -- # sync 00:22:24.144 15:46:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:24.144 15:46:03 -- nvmf/common.sh@119 -- # set +e 00:22:24.144 15:46:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:24.144 15:46:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:24.144 rmmod nvme_tcp 00:22:24.144 rmmod nvme_fabrics 00:22:24.144 rmmod nvme_keyring 00:22:24.144 15:46:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:24.144 15:46:03 -- nvmf/common.sh@123 -- # set -e 00:22:24.144 15:46:03 -- nvmf/common.sh@124 -- # return 0 00:22:24.144 15:46:03 -- nvmf/common.sh@477 -- # '[' -n 2186692 ']' 00:22:24.144 15:46:03 -- nvmf/common.sh@478 -- # killprocess 2186692 00:22:24.144 15:46:03 -- common/autotest_common.sh@926 -- # '[' -z 2186692 ']' 00:22:24.144 15:46:03 -- common/autotest_common.sh@930 -- # kill -0 2186692 00:22:24.144 15:46:03 -- common/autotest_common.sh@931 -- # uname 00:22:24.144 15:46:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:24.144 15:46:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2186692 00:22:24.144 15:46:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:24.144 15:46:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:24.144 15:46:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2186692' 00:22:24.144 killing process with pid 2186692 00:22:24.144 15:46:03 -- common/autotest_common.sh@945 -- # kill 2186692 00:22:24.144 15:46:03 -- common/autotest_common.sh@950 -- # wait 2186692 00:22:24.401 15:46:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:24.401 15:46:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:24.401 15:46:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:24.401 15:46:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:24.401 15:46:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:24.401 15:46:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:24.401 15:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:24.401 15:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.935 15:46:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:26.935 00:22:26.935 real 0m6.187s 00:22:26.935 user 0m3.016s 00:22:26.935 sys 0m1.819s 00:22:26.935 15:46:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.935 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.935 ************************************ 00:22:26.935 END TEST nvmf_async_init 00:22:26.935 
************************************ 00:22:26.935 15:46:05 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:26.935 15:46:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:26.935 15:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:26.935 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.935 ************************************ 00:22:26.935 START TEST dma 00:22:26.935 ************************************ 00:22:26.935 15:46:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:26.935 * Looking for test storage... 00:22:26.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.935 15:46:05 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.935 15:46:05 -- nvmf/common.sh@7 -- # uname -s 00:22:26.935 15:46:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.935 15:46:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.935 15:46:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.935 15:46:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.935 15:46:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.935 15:46:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.935 15:46:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.935 15:46:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.935 15:46:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.935 15:46:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.935 15:46:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.935 15:46:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.935 15:46:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.935 15:46:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.935 15:46:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.935 15:46:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.935 15:46:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.935 15:46:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.935 15:46:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.935 15:46:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.935 15:46:05 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.936 15:46:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.936 15:46:05 -- paths/export.sh@5 -- # export PATH 00:22:26.936 15:46:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.936 15:46:05 -- nvmf/common.sh@46 -- # : 0 00:22:26.936 15:46:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:26.936 15:46:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:26.936 15:46:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:26.936 15:46:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.936 15:46:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.936 15:46:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:26.936 15:46:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:26.936 15:46:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:26.936 15:46:05 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:26.936 15:46:05 -- host/dma.sh@13 -- # exit 0 00:22:26.936 00:22:26.936 real 0m0.067s 00:22:26.936 user 0m0.029s 00:22:26.936 sys 0m0.044s 00:22:26.936 15:46:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.936 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.936 ************************************ 00:22:26.936 END TEST dma 00:22:26.936 ************************************ 00:22:26.936 15:46:05 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:26.936 15:46:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:26.936 15:46:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:26.936 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:26.936 ************************************ 00:22:26.936 START TEST nvmf_identify 00:22:26.936 ************************************ 00:22:26.936 15:46:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:26.936 * Looking for 
test storage... 00:22:26.936 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:26.936 15:46:05 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.936 15:46:05 -- nvmf/common.sh@7 -- # uname -s 00:22:26.936 15:46:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.936 15:46:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.936 15:46:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.936 15:46:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.936 15:46:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.936 15:46:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.936 15:46:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.936 15:46:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.936 15:46:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.936 15:46:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.936 15:46:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.936 15:46:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.936 15:46:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.936 15:46:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.936 15:46:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.936 15:46:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.936 15:46:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.936 15:46:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.936 15:46:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.936 15:46:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.936 15:46:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.936 15:46:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.936 15:46:05 -- paths/export.sh@5 -- # export PATH 00:22:26.936 15:46:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.936 15:46:05 -- nvmf/common.sh@46 -- # : 0 00:22:26.936 15:46:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:26.936 15:46:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:26.936 15:46:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:26.936 15:46:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.936 15:46:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.936 15:46:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:26.936 15:46:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:26.936 15:46:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:26.936 15:46:05 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.936 15:46:05 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.936 15:46:05 -- host/identify.sh@14 -- # nvmftestinit 00:22:26.936 15:46:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:26.936 15:46:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.936 15:46:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:26.936 15:46:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:26.936 15:46:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:26.936 15:46:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.936 15:46:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.936 15:46:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.936 15:46:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:26.936 15:46:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:26.936 15:46:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:26.936 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:22:28.839 15:46:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:28.839 15:46:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:28.839 15:46:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:28.839 15:46:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:28.839 15:46:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:28.839 15:46:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:28.839 15:46:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:28.839 15:46:07 -- nvmf/common.sh@294 -- # net_devs=() 00:22:28.839 15:46:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:28.839 15:46:07 -- nvmf/common.sh@295 
-- # e810=() 00:22:28.839 15:46:07 -- nvmf/common.sh@295 -- # local -ga e810 00:22:28.839 15:46:07 -- nvmf/common.sh@296 -- # x722=() 00:22:28.839 15:46:07 -- nvmf/common.sh@296 -- # local -ga x722 00:22:28.839 15:46:07 -- nvmf/common.sh@297 -- # mlx=() 00:22:28.839 15:46:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:28.839 15:46:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.839 15:46:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:28.839 15:46:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:28.839 15:46:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:28.839 15:46:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.839 15:46:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:28.839 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:28.839 15:46:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:28.839 15:46:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:28.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:28.839 15:46:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:28.839 15:46:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.839 15:46:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.839 15:46:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.839 15:46:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.839 15:46:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:28.839 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:22:28.839 15:46:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.839 15:46:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:28.839 15:46:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.839 15:46:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:28.839 15:46:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.839 15:46:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:28.839 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:28.839 15:46:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.839 15:46:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:28.839 15:46:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:28.839 15:46:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:28.839 15:46:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:28.839 15:46:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.839 15:46:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.839 15:46:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.839 15:46:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:28.839 15:46:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.839 15:46:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.839 15:46:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:28.839 15:46:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.839 15:46:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.839 15:46:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:28.839 15:46:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:28.839 15:46:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.839 15:46:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.839 15:46:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.839 15:46:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.839 15:46:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:28.839 15:46:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.839 15:46:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.839 15:46:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.839 15:46:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:28.839 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.839 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:22:28.839 00:22:28.839 --- 10.0.0.2 ping statistics --- 00:22:28.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.839 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:28.839 15:46:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.839 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.839 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:22:28.839 00:22:28.839 --- 10.0.0.1 ping statistics --- 00:22:28.839 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.839 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:28.839 15:46:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.839 15:46:08 -- nvmf/common.sh@410 -- # return 0 00:22:28.839 15:46:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:28.839 15:46:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.839 15:46:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:28.839 15:46:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:28.839 15:46:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.839 15:46:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:28.839 15:46:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:28.839 15:46:08 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:28.839 15:46:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:28.839 15:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:28.839 15:46:08 -- host/identify.sh@19 -- # nvmfpid=2188959 00:22:28.839 15:46:08 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:28.839 15:46:08 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:28.839 15:46:08 -- host/identify.sh@23 -- # waitforlisten 2188959 00:22:28.839 15:46:08 -- common/autotest_common.sh@819 -- # '[' -z 2188959 ']' 00:22:28.839 15:46:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.839 15:46:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.839 15:46:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.839 15:46:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.839 15:46:08 -- common/autotest_common.sh@10 -- # set +x 00:22:28.839 [2024-07-10 15:46:08.114942] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:28.839 [2024-07-10 15:46:08.115025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.839 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.839 [2024-07-10 15:46:08.183811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:29.097 [2024-07-10 15:46:08.301154] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:29.097 [2024-07-10 15:46:08.301326] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:29.097 [2024-07-10 15:46:08.301345] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:29.097 [2024-07-10 15:46:08.301360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:29.097 [2024-07-10 15:46:08.301445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.097 [2024-07-10 15:46:08.301510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.097 [2024-07-10 15:46:08.301594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:29.097 [2024-07-10 15:46:08.301596] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.034 15:46:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:30.034 15:46:09 -- common/autotest_common.sh@852 -- # return 0 00:22:30.034 15:46:09 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:30.034 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.034 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.034 [2024-07-10 15:46:09.049850] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:30.034 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.034 15:46:09 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:30.034 15:46:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:30.034 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.034 15:46:09 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:30.034 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.034 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.034 Malloc0 00:22:30.034 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.034 15:46:09 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:30.034 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.035 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.035 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.035 15:46:09 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:30.035 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.035 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.035 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.035 15:46:09 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:30.035 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.035 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.035 [2024-07-10 15:46:09.116919] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:30.035 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.035 15:46:09 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:30.035 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.035 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.035 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.035 15:46:09 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:30.035 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.035 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.035 [2024-07-10 15:46:09.132698] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:22:30.035 [ 
00:22:30.035 { 00:22:30.035 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:30.035 "subtype": "Discovery", 00:22:30.035 "listen_addresses": [ 00:22:30.035 { 00:22:30.035 "transport": "TCP", 00:22:30.035 "trtype": "TCP", 00:22:30.035 "adrfam": "IPv4", 00:22:30.035 "traddr": "10.0.0.2", 00:22:30.035 "trsvcid": "4420" 00:22:30.035 } 00:22:30.035 ], 00:22:30.035 "allow_any_host": true, 00:22:30.035 "hosts": [] 00:22:30.035 }, 00:22:30.035 { 00:22:30.035 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:30.035 "subtype": "NVMe", 00:22:30.035 "listen_addresses": [ 00:22:30.035 { 00:22:30.035 "transport": "TCP", 00:22:30.035 "trtype": "TCP", 00:22:30.035 "adrfam": "IPv4", 00:22:30.035 "traddr": "10.0.0.2", 00:22:30.035 "trsvcid": "4420" 00:22:30.035 } 00:22:30.035 ], 00:22:30.035 "allow_any_host": true, 00:22:30.035 "hosts": [], 00:22:30.035 "serial_number": "SPDK00000000000001", 00:22:30.035 "model_number": "SPDK bdev Controller", 00:22:30.035 "max_namespaces": 32, 00:22:30.035 "min_cntlid": 1, 00:22:30.035 "max_cntlid": 65519, 00:22:30.035 "namespaces": [ 00:22:30.035 { 00:22:30.035 "nsid": 1, 00:22:30.035 "bdev_name": "Malloc0", 00:22:30.035 "name": "Malloc0", 00:22:30.035 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:30.035 "eui64": "ABCDEF0123456789", 00:22:30.035 "uuid": "41793530-bd02-4c7a-9dbb-fecbe0d418ed" 00:22:30.035 } 00:22:30.035 ] 00:22:30.035 } 00:22:30.035 ] 00:22:30.035 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.035 15:46:09 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:30.035 [2024-07-10 15:46:09.153762] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:30.035 [2024-07-10 15:46:09.153809] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189119 ] 00:22:30.035 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.035 [2024-07-10 15:46:09.188587] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:30.035 [2024-07-10 15:46:09.188639] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:30.035 [2024-07-10 15:46:09.188649] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:30.035 [2024-07-10 15:46:09.188664] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:30.035 [2024-07-10 15:46:09.188676] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:30.035 [2024-07-10 15:46:09.189015] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:30.035 [2024-07-10 15:46:09.189072] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17bbe10 0 00:22:30.035 [2024-07-10 15:46:09.203460] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:30.035 [2024-07-10 15:46:09.203483] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:30.035 [2024-07-10 15:46:09.203494] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:30.035 [2024-07-10 15:46:09.203503] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:30.035 [2024-07-10 15:46:09.203562] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.203581] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.203592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.035 [2024-07-10 15:46:09.203612] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:30.035 [2024-07-10 15:46:09.203644] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.035 [2024-07-10 15:46:09.211439] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.035 [2024-07-10 15:46:09.211458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.035 [2024-07-10 15:46:09.211471] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.211478] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.035 [2024-07-10 15:46:09.211493] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:30.035 [2024-07-10 15:46:09.211520] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:30.035 [2024-07-10 15:46:09.211529] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:30.035 [2024-07-10 15:46:09.211548] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.211557] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.211564] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.035 [2024-07-10 15:46:09.211575] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.035 [2024-07-10 15:46:09.211599] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.035 [2024-07-10 15:46:09.211773] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.035 [2024-07-10 15:46:09.211789] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.035 [2024-07-10 15:46:09.211796] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.211802] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.035 [2024-07-10 15:46:09.211812] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:30.035 [2024-07-10 15:46:09.211826] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:30.035 [2024-07-10 15:46:09.211838] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.211846] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.211852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.035 [2024-07-10 15:46:09.211862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.035 [2024-07-10 15:46:09.211884] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.035 [2024-07-10 15:46:09.212029] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.035 [2024-07-10 15:46:09.212044] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.035 [2024-07-10 15:46:09.212051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.212058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.035 [2024-07-10 15:46:09.212068] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:30.035 [2024-07-10 15:46:09.212082] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:30.035 [2024-07-10 15:46:09.212095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.212102] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.212113] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.035 [2024-07-10 15:46:09.212124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.035 [2024-07-10 15:46:09.212145] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.035 [2024-07-10 15:46:09.212276] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.035 [2024-07-10 
15:46:09.212291] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.035 [2024-07-10 15:46:09.212298] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.212305] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.035 [2024-07-10 15:46:09.212315] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:30.035 [2024-07-10 15:46:09.212332] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.212341] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.212347] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.035 [2024-07-10 15:46:09.212357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.035 [2024-07-10 15:46:09.212378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.035 [2024-07-10 15:46:09.212510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.035 [2024-07-10 15:46:09.212525] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.035 [2024-07-10 15:46:09.212532] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.035 [2024-07-10 15:46:09.212539] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.035 [2024-07-10 15:46:09.212549] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:30.035 [2024-07-10 15:46:09.212557] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:30.035 [2024-07-10 15:46:09.212570] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:30.035 [2024-07-10 15:46:09.212681] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:30.035 [2024-07-10 15:46:09.212689] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:30.036 [2024-07-10 15:46:09.212703] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.212711] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.212717] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.212742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.036 [2024-07-10 15:46:09.212764] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.036 [2024-07-10 15:46:09.212954] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.036 [2024-07-10 15:46:09.212970] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.036 [2024-07-10 15:46:09.212977] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.212983] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.036 [2024-07-10 15:46:09.212993] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:30.036 [2024-07-10 15:46:09.213014] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213024] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.213041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.036 [2024-07-10 15:46:09.213062] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.036 [2024-07-10 15:46:09.213188] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.036 [2024-07-10 15:46:09.213203] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.036 [2024-07-10 15:46:09.213210] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213217] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.036 [2024-07-10 15:46:09.213226] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:30.036 [2024-07-10 15:46:09.213235] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:30.036 [2024-07-10 15:46:09.213248] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:30.036 [2024-07-10 15:46:09.213263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:30.036 [2024-07-10 15:46:09.213278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213286] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213292] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.213303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.036 [2024-07-10 15:46:09.213324] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.036 [2024-07-10 15:46:09.213539] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.036 [2024-07-10 15:46:09.213555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.036 [2024-07-10 15:46:09.213562] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213569] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bbe10): datao=0, datal=4096, cccid=0 00:22:30.036 [2024-07-10 15:46:09.213577] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x183bbf0) on tqpair(0x17bbe10): 
expected_datao=0, payload_size=4096 00:22:30.036 [2024-07-10 15:46:09.213604] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213614] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213699] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.036 [2024-07-10 15:46:09.213714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.036 [2024-07-10 15:46:09.213720] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213727] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.036 [2024-07-10 15:46:09.213741] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:30.036 [2024-07-10 15:46:09.213750] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:30.036 [2024-07-10 15:46:09.213758] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:30.036 [2024-07-10 15:46:09.213766] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:30.036 [2024-07-10 15:46:09.213778] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:30.036 [2024-07-10 15:46:09.213787] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:30.036 [2024-07-10 15:46:09.213805] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:30.036 [2024-07-10 15:46:09.213819] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213827] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.213833] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.213844] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:30.036 [2024-07-10 15:46:09.213865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.036 [2024-07-10 15:46:09.214032] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.036 [2024-07-10 15:46:09.214047] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.036 [2024-07-10 15:46:09.214054] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214061] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183bbf0) on tqpair=0x17bbe10 00:22:30.036 [2024-07-10 15:46:09.214074] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214082] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214088] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.214098] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:30.036 [2024-07-10 15:46:09.214108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214115] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.214130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.036 [2024-07-10 15:46:09.214140] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214147] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214153] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.214162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.036 [2024-07-10 15:46:09.214171] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214178] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214184] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.214193] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.036 [2024-07-10 15:46:09.214202] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:30.036 [2024-07-10 15:46:09.214221] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:30.036 [2024-07-10 15:46:09.214233] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214241] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214247] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.214275] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.036 [2024-07-10 15:46:09.214299] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bbf0, cid 0, qid 0 00:22:30.036 [2024-07-10 15:46:09.214310] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183bd50, cid 1, qid 0 00:22:30.036 [2024-07-10 15:46:09.214318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183beb0, cid 2, qid 0 00:22:30.036 [2024-07-10 15:46:09.214340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.036 [2024-07-10 15:46:09.214348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c170, cid 4, qid 0 00:22:30.036 [2024-07-10 15:46:09.214571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.036 [2024-07-10 15:46:09.214587] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.036 [2024-07-10 15:46:09.214593] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214600] nvme_tcp.c: 
857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c170) on tqpair=0x17bbe10 00:22:30.036 [2024-07-10 15:46:09.214610] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:30.036 [2024-07-10 15:46:09.214619] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:30.036 [2024-07-10 15:46:09.214637] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214646] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214653] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bbe10) 00:22:30.036 [2024-07-10 15:46:09.214663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.036 [2024-07-10 15:46:09.214684] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c170, cid 4, qid 0 00:22:30.036 [2024-07-10 15:46:09.214873] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.036 [2024-07-10 15:46:09.214889] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.036 [2024-07-10 15:46:09.214896] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214902] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bbe10): datao=0, datal=4096, cccid=4 00:22:30.036 [2024-07-10 15:46:09.214910] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x183c170) on tqpair(0x17bbe10): expected_datao=0, payload_size=4096 00:22:30.036 [2024-07-10 15:46:09.214927] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.214936] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.259439] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.036 [2024-07-10 15:46:09.259457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.036 [2024-07-10 15:46:09.259465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.259472] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c170) on tqpair=0x17bbe10 00:22:30.036 [2024-07-10 15:46:09.259493] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:30.036 [2024-07-10 15:46:09.259532] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.036 [2024-07-10 15:46:09.259543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.259550] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bbe10) 00:22:30.037 [2024-07-10 15:46:09.259561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.037 [2024-07-10 15:46:09.259573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.259585] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.259592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17bbe10) 00:22:30.037 [2024-07-10 
15:46:09.259602] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.037 [2024-07-10 15:46:09.259630] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c170, cid 4, qid 0 00:22:30.037 [2024-07-10 15:46:09.259642] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c2d0, cid 5, qid 0 00:22:30.037 [2024-07-10 15:46:09.259846] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.037 [2024-07-10 15:46:09.259862] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.037 [2024-07-10 15:46:09.259869] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.259875] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bbe10): datao=0, datal=1024, cccid=4 00:22:30.037 [2024-07-10 15:46:09.259883] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x183c170) on tqpair(0x17bbe10): expected_datao=0, payload_size=1024 00:22:30.037 [2024-07-10 15:46:09.259894] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.259901] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.259910] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.037 [2024-07-10 15:46:09.259919] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.037 [2024-07-10 15:46:09.259925] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.259932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c2d0) on tqpair=0x17bbe10 00:22:30.037 [2024-07-10 15:46:09.303442] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.037 [2024-07-10 15:46:09.303459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.037 [2024-07-10 15:46:09.303467] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303473] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c170) on tqpair=0x17bbe10 00:22:30.037 [2024-07-10 15:46:09.303492] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303501] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303507] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bbe10) 00:22:30.037 [2024-07-10 15:46:09.303518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.037 [2024-07-10 15:46:09.303562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c170, cid 4, qid 0 00:22:30.037 [2024-07-10 15:46:09.303734] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.037 [2024-07-10 15:46:09.303749] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.037 [2024-07-10 15:46:09.303756] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303763] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bbe10): datao=0, datal=3072, cccid=4 00:22:30.037 [2024-07-10 15:46:09.303771] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x183c170) on tqpair(0x17bbe10): expected_datao=0, payload_size=3072 
00:22:30.037 [2024-07-10 15:46:09.303806] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303816] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303906] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.037 [2024-07-10 15:46:09.303918] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.037 [2024-07-10 15:46:09.303925] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303931] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c170) on tqpair=0x17bbe10 00:22:30.037 [2024-07-10 15:46:09.303948] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303961] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.303968] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17bbe10) 00:22:30.037 [2024-07-10 15:46:09.303979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.037 [2024-07-10 15:46:09.304007] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c170, cid 4, qid 0 00:22:30.037 [2024-07-10 15:46:09.304146] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.037 [2024-07-10 15:46:09.304158] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.037 [2024-07-10 15:46:09.304165] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.304171] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17bbe10): datao=0, datal=8, cccid=4 00:22:30.037 [2024-07-10 15:46:09.304179] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x183c170) on tqpair(0x17bbe10): expected_datao=0, payload_size=8 00:22:30.037 [2024-07-10 15:46:09.304190] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.304197] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.347452] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.037 [2024-07-10 15:46:09.347470] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.037 [2024-07-10 15:46:09.347477] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.037 [2024-07-10 15:46:09.347484] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c170) on tqpair=0x17bbe10 00:22:30.037 ===================================================== 00:22:30.037 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:30.037 ===================================================== 00:22:30.037 Controller Capabilities/Features 00:22:30.037 ================================ 00:22:30.037 Vendor ID: 0000 00:22:30.037 Subsystem Vendor ID: 0000 00:22:30.037 Serial Number: .................... 00:22:30.037 Model Number: ........................................ 
00:22:30.037 Firmware Version: 24.01.1 00:22:30.037 Recommended Arb Burst: 0 00:22:30.037 IEEE OUI Identifier: 00 00 00 00:22:30.037 Multi-path I/O 00:22:30.037 May have multiple subsystem ports: No 00:22:30.037 May have multiple controllers: No 00:22:30.037 Associated with SR-IOV VF: No 00:22:30.037 Max Data Transfer Size: 131072 00:22:30.037 Max Number of Namespaces: 0 00:22:30.037 Max Number of I/O Queues: 1024 00:22:30.037 NVMe Specification Version (VS): 1.3 00:22:30.037 NVMe Specification Version (Identify): 1.3 00:22:30.037 Maximum Queue Entries: 128 00:22:30.037 Contiguous Queues Required: Yes 00:22:30.037 Arbitration Mechanisms Supported 00:22:30.037 Weighted Round Robin: Not Supported 00:22:30.037 Vendor Specific: Not Supported 00:22:30.037 Reset Timeout: 15000 ms 00:22:30.037 Doorbell Stride: 4 bytes 00:22:30.037 NVM Subsystem Reset: Not Supported 00:22:30.037 Command Sets Supported 00:22:30.037 NVM Command Set: Supported 00:22:30.037 Boot Partition: Not Supported 00:22:30.037 Memory Page Size Minimum: 4096 bytes 00:22:30.037 Memory Page Size Maximum: 4096 bytes 00:22:30.037 Persistent Memory Region: Not Supported 00:22:30.037 Optional Asynchronous Events Supported 00:22:30.037 Namespace Attribute Notices: Not Supported 00:22:30.037 Firmware Activation Notices: Not Supported 00:22:30.037 ANA Change Notices: Not Supported 00:22:30.037 PLE Aggregate Log Change Notices: Not Supported 00:22:30.037 LBA Status Info Alert Notices: Not Supported 00:22:30.037 EGE Aggregate Log Change Notices: Not Supported 00:22:30.037 Normal NVM Subsystem Shutdown event: Not Supported 00:22:30.037 Zone Descriptor Change Notices: Not Supported 00:22:30.037 Discovery Log Change Notices: Supported 00:22:30.037 Controller Attributes 00:22:30.037 128-bit Host Identifier: Not Supported 00:22:30.037 Non-Operational Permissive Mode: Not Supported 00:22:30.037 NVM Sets: Not Supported 00:22:30.037 Read Recovery Levels: Not Supported 00:22:30.037 Endurance Groups: Not Supported 00:22:30.037 Predictable Latency Mode: Not Supported 00:22:30.037 Traffic Based Keep ALive: Not Supported 00:22:30.037 Namespace Granularity: Not Supported 00:22:30.037 SQ Associations: Not Supported 00:22:30.037 UUID List: Not Supported 00:22:30.037 Multi-Domain Subsystem: Not Supported 00:22:30.037 Fixed Capacity Management: Not Supported 00:22:30.037 Variable Capacity Management: Not Supported 00:22:30.037 Delete Endurance Group: Not Supported 00:22:30.037 Delete NVM Set: Not Supported 00:22:30.037 Extended LBA Formats Supported: Not Supported 00:22:30.037 Flexible Data Placement Supported: Not Supported 00:22:30.037 00:22:30.037 Controller Memory Buffer Support 00:22:30.037 ================================ 00:22:30.037 Supported: No 00:22:30.037 00:22:30.037 Persistent Memory Region Support 00:22:30.037 ================================ 00:22:30.037 Supported: No 00:22:30.037 00:22:30.037 Admin Command Set Attributes 00:22:30.037 ============================ 00:22:30.037 Security Send/Receive: Not Supported 00:22:30.037 Format NVM: Not Supported 00:22:30.037 Firmware Activate/Download: Not Supported 00:22:30.037 Namespace Management: Not Supported 00:22:30.037 Device Self-Test: Not Supported 00:22:30.037 Directives: Not Supported 00:22:30.037 NVMe-MI: Not Supported 00:22:30.037 Virtualization Management: Not Supported 00:22:30.037 Doorbell Buffer Config: Not Supported 00:22:30.037 Get LBA Status Capability: Not Supported 00:22:30.037 Command & Feature Lockdown Capability: Not Supported 00:22:30.037 Abort Command Limit: 1 00:22:30.037 
Async Event Request Limit: 4 00:22:30.037 Number of Firmware Slots: N/A 00:22:30.037 Firmware Slot 1 Read-Only: N/A 00:22:30.037 Firmware Activation Without Reset: N/A 00:22:30.037 Multiple Update Detection Support: N/A 00:22:30.037 Firmware Update Granularity: No Information Provided 00:22:30.037 Per-Namespace SMART Log: No 00:22:30.037 Asymmetric Namespace Access Log Page: Not Supported 00:22:30.037 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:30.037 Command Effects Log Page: Not Supported 00:22:30.037 Get Log Page Extended Data: Supported 00:22:30.037 Telemetry Log Pages: Not Supported 00:22:30.037 Persistent Event Log Pages: Not Supported 00:22:30.037 Supported Log Pages Log Page: May Support 00:22:30.037 Commands Supported & Effects Log Page: Not Supported 00:22:30.037 Feature Identifiers & Effects Log Page:May Support 00:22:30.038 NVMe-MI Commands & Effects Log Page: May Support 00:22:30.038 Data Area 4 for Telemetry Log: Not Supported 00:22:30.038 Error Log Page Entries Supported: 128 00:22:30.038 Keep Alive: Not Supported 00:22:30.038 00:22:30.038 NVM Command Set Attributes 00:22:30.038 ========================== 00:22:30.038 Submission Queue Entry Size 00:22:30.038 Max: 1 00:22:30.038 Min: 1 00:22:30.038 Completion Queue Entry Size 00:22:30.038 Max: 1 00:22:30.038 Min: 1 00:22:30.038 Number of Namespaces: 0 00:22:30.038 Compare Command: Not Supported 00:22:30.038 Write Uncorrectable Command: Not Supported 00:22:30.038 Dataset Management Command: Not Supported 00:22:30.038 Write Zeroes Command: Not Supported 00:22:30.038 Set Features Save Field: Not Supported 00:22:30.038 Reservations: Not Supported 00:22:30.038 Timestamp: Not Supported 00:22:30.038 Copy: Not Supported 00:22:30.038 Volatile Write Cache: Not Present 00:22:30.038 Atomic Write Unit (Normal): 1 00:22:30.038 Atomic Write Unit (PFail): 1 00:22:30.038 Atomic Compare & Write Unit: 1 00:22:30.038 Fused Compare & Write: Supported 00:22:30.038 Scatter-Gather List 00:22:30.038 SGL Command Set: Supported 00:22:30.038 SGL Keyed: Supported 00:22:30.038 SGL Bit Bucket Descriptor: Not Supported 00:22:30.038 SGL Metadata Pointer: Not Supported 00:22:30.038 Oversized SGL: Not Supported 00:22:30.038 SGL Metadata Address: Not Supported 00:22:30.038 SGL Offset: Supported 00:22:30.038 Transport SGL Data Block: Not Supported 00:22:30.038 Replay Protected Memory Block: Not Supported 00:22:30.038 00:22:30.038 Firmware Slot Information 00:22:30.038 ========================= 00:22:30.038 Active slot: 0 00:22:30.038 00:22:30.038 00:22:30.038 Error Log 00:22:30.038 ========= 00:22:30.038 00:22:30.038 Active Namespaces 00:22:30.038 ================= 00:22:30.038 Discovery Log Page 00:22:30.038 ================== 00:22:30.038 Generation Counter: 2 00:22:30.038 Number of Records: 2 00:22:30.038 Record Format: 0 00:22:30.038 00:22:30.038 Discovery Log Entry 0 00:22:30.038 ---------------------- 00:22:30.038 Transport Type: 3 (TCP) 00:22:30.038 Address Family: 1 (IPv4) 00:22:30.038 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:30.038 Entry Flags: 00:22:30.038 Duplicate Returned Information: 1 00:22:30.038 Explicit Persistent Connection Support for Discovery: 1 00:22:30.038 Transport Requirements: 00:22:30.038 Secure Channel: Not Required 00:22:30.038 Port ID: 0 (0x0000) 00:22:30.038 Controller ID: 65535 (0xffff) 00:22:30.038 Admin Max SQ Size: 128 00:22:30.038 Transport Service Identifier: 4420 00:22:30.038 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:30.038 Transport Address: 10.0.0.2 00:22:30.038 
Discovery Log Entry 1 00:22:30.038 ---------------------- 00:22:30.038 Transport Type: 3 (TCP) 00:22:30.038 Address Family: 1 (IPv4) 00:22:30.038 Subsystem Type: 2 (NVM Subsystem) 00:22:30.038 Entry Flags: 00:22:30.038 Duplicate Returned Information: 0 00:22:30.038 Explicit Persistent Connection Support for Discovery: 0 00:22:30.038 Transport Requirements: 00:22:30.038 Secure Channel: Not Required 00:22:30.038 Port ID: 0 (0x0000) 00:22:30.038 Controller ID: 65535 (0xffff) 00:22:30.038 Admin Max SQ Size: 128 00:22:30.038 Transport Service Identifier: 4420 00:22:30.038 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:30.038 Transport Address: 10.0.0.2 [2024-07-10 15:46:09.347612] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:30.038 [2024-07-10 15:46:09.347637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.038 [2024-07-10 15:46:09.347649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.038 [2024-07-10 15:46:09.347659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.038 [2024-07-10 15:46:09.347669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.038 [2024-07-10 15:46:09.347682] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.347690] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.347697] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.038 [2024-07-10 15:46:09.347708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.038 [2024-07-10 15:46:09.347733] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.038 [2024-07-10 15:46:09.347893] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.038 [2024-07-10 15:46:09.347909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.038 [2024-07-10 15:46:09.347916] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.347922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.038 [2024-07-10 15:46:09.347935] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.347943] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.347950] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.038 [2024-07-10 15:46:09.347960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.038 [2024-07-10 15:46:09.347986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.038 [2024-07-10 15:46:09.348171] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.038 [2024-07-10 15:46:09.348184] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.038 [2024-07-10 15:46:09.348190] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.038 [2024-07-10 15:46:09.348207] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:30.038 [2024-07-10 15:46:09.348215] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:30.038 [2024-07-10 15:46:09.348230] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348239] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348246] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.038 [2024-07-10 15:46:09.348256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.038 [2024-07-10 15:46:09.348277] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.038 [2024-07-10 15:46:09.348447] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.038 [2024-07-10 15:46:09.348472] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.038 [2024-07-10 15:46:09.348480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348487] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.038 [2024-07-10 15:46:09.348506] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348516] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348522] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.038 [2024-07-10 15:46:09.348533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.038 [2024-07-10 15:46:09.348554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.038 [2024-07-10 15:46:09.348700] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.038 [2024-07-10 15:46:09.348715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.038 [2024-07-10 15:46:09.348722] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348729] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.038 [2024-07-10 15:46:09.348746] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348756] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.038 [2024-07-10 15:46:09.348773] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.038 [2024-07-10 15:46:09.348793] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.038 [2024-07-10 15:46:09.348921] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.038 [2024-07-10 
15:46:09.348933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.038 [2024-07-10 15:46:09.348940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.038 [2024-07-10 15:46:09.348964] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348974] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.348980] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.038 [2024-07-10 15:46:09.348990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.038 [2024-07-10 15:46:09.349015] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.038 [2024-07-10 15:46:09.349142] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.038 [2024-07-10 15:46:09.349157] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.038 [2024-07-10 15:46:09.349164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.349171] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.038 [2024-07-10 15:46:09.349189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.349198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.038 [2024-07-10 15:46:09.349205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.038 [2024-07-10 15:46:09.349215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.038 [2024-07-10 15:46:09.349236] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.038 [2024-07-10 15:46:09.349362] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.349374] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.349381] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349388] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.349405] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349415] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349421] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.349439] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.349461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.349605] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.349617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.349624] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:22:30.039 [2024-07-10 15:46:09.349630] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.349647] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349657] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349663] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.349674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.349694] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.349817] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.349829] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.349836] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349843] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.349860] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.349876] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.349886] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.349911] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.350032] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.350044] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.350051] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350058] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.350075] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350085] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350091] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.350101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.350121] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.350244] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.350256] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.350263] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350269] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.350286] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.350313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.350333] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.350464] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.350480] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.350486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350493] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.350511] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350521] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350527] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.350538] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.350559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.350701] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.350713] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.350720] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350727] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.350744] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350760] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.350770] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.350790] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.350918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.350933] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.350940] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.350965] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.350975] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 
15:46:09.350981] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.350992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.351012] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.351145] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.351157] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.351164] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.351171] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.351188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.351197] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.351203] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.351214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.351233] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.351358] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.351370] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.351377] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.351384] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.351401] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.351410] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.351417] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17bbe10) 00:22:30.039 [2024-07-10 15:46:09.355435] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.039 [2024-07-10 15:46:09.355465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x183c010, cid 3, qid 0 00:22:30.039 [2024-07-10 15:46:09.355618] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.039 [2024-07-10 15:46:09.355631] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.039 [2024-07-10 15:46:09.355638] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.039 [2024-07-10 15:46:09.355645] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x183c010) on tqpair=0x17bbe10 00:22:30.039 [2024-07-10 15:46:09.355659] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:22:30.040 00:22:30.040 15:46:09 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:30.040 
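
Note on the trace above: the GET LOG PAGE (02) admin commands whose cdw10 low byte is 0x70 are reads of the NVMe-oF discovery log page from the discovery subsystem at 10.0.0.2:4420, and their payload is what gets rendered as the Discovery Log Entry 0/1 listing. The following is only a minimal sketch of issuing the same read through SPDK's public NVMe API, not the test code itself; the program name, buffer size, and error handling are assumptions.

/* Sketch: read the discovery log page (log identifier 0x70) over NVMe/TCP,
 * mirroring the GET LOG PAGE commands seen in the trace above. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_done;

static void
get_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "GET LOG PAGE failed\n");
	}
	g_done = true;
}

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	struct spdk_nvmf_discovery_log_page *log;

	spdk_env_opts_init(&opts);
	opts.name = "discovery_log_sketch";           /* hypothetical name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* Same discovery endpoint as the trace: TCP, 10.0.0.2:4420. */
	spdk_nvme_transport_id_parse(&trid,
		"trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", SPDK_NVMF_DISCOVERY_NQN);

	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	/* One 4 KiB read of log page 0x70. A complete reader re-reads until the
	 * generation counter is stable, which is what the trailing 8-byte read
	 * in the trace above is doing. */
	log = calloc(1, 4096);
	if (log == NULL) {
		spdk_nvme_detach(ctrlr);
		return 1;
	}
	spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					 log, 4096, 0, get_log_done, NULL);
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}

	printf("genctr=%" PRIu64 " numrec=%" PRIu64 "\n", log->genctr, log->numrec);

	free(log);
	spdk_nvme_detach(ctrlr);
	return 0;
}
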
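The -r argument on the spdk_nvme_identify command above is an SPDK transport ID string naming the TCP transport, the target at 10.0.0.2:4420, and the subsystem nqn.2016-06.io.spdk:cnode1. A minimal sketch of parsing that same string and connecting through SPDK's public API follows; it is not the spdk_nvme_identify tool itself, and the program name and the handful of printed fields are illustrative assumptions.

/* Sketch: parse the transport ID string passed to spdk_nvme_identify above,
 * connect to the subsystem over TCP, and print a few identify-controller fields. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int
main(void)
{
	struct spdk_env_opts opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&opts);
	opts.name = "identify_sketch";                /* hypothetical name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* The -r string from the command line above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* spdk_nvme_connect() drives the admin-queue sequence logged above:
	 * FABRIC CONNECT, CC/CSTS property reads and writes, IDENTIFY, AER setup. */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("CNTLID: 0x%04x\n", cdata->cntlid);
	printf("Model Number: %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);
	printf("Firmware Version: %.*s\n", (int)sizeof(cdata->fr), (const char *)cdata->fr);

	spdk_nvme_detach(ctrlr);
	return 0;
}
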
[2024-07-10 15:46:09.389130] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:30.040 [2024-07-10 15:46:09.389174] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189122 ] 00:22:30.040 EAL: No free 2048 kB hugepages reported on node 1 00:22:30.302 [2024-07-10 15:46:09.421586] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:30.302 [2024-07-10 15:46:09.421633] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:30.302 [2024-07-10 15:46:09.421643] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:30.302 [2024-07-10 15:46:09.421657] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:30.302 [2024-07-10 15:46:09.421669] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:30.302 [2024-07-10 15:46:09.425494] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:30.302 [2024-07-10 15:46:09.425537] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x175ae10 0 00:22:30.302 [2024-07-10 15:46:09.433441] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:30.302 [2024-07-10 15:46:09.433459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:30.302 [2024-07-10 15:46:09.433467] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:30.302 [2024-07-10 15:46:09.433488] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:30.302 [2024-07-10 15:46:09.433526] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.433538] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.433545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.302 [2024-07-10 15:46:09.433559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:30.302 [2024-07-10 15:46:09.433585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.302 [2024-07-10 15:46:09.440454] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.302 [2024-07-10 15:46:09.440472] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.302 [2024-07-10 15:46:09.440479] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.440486] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.302 [2024-07-10 15:46:09.440500] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:30.302 [2024-07-10 15:46:09.440510] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:30.302 [2024-07-10 15:46:09.440519] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:30.302 [2024-07-10 15:46:09.440535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.302 [2024-07-10 
15:46:09.440543] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.440549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.302 [2024-07-10 15:46:09.440560] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.302 [2024-07-10 15:46:09.440584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.302 [2024-07-10 15:46:09.440752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.302 [2024-07-10 15:46:09.440765] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.302 [2024-07-10 15:46:09.440771] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.440778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.302 [2024-07-10 15:46:09.440792] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:30.302 [2024-07-10 15:46:09.440806] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:30.302 [2024-07-10 15:46:09.440818] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.440825] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.440832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.302 [2024-07-10 15:46:09.440842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.302 [2024-07-10 15:46:09.440864] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.302 [2024-07-10 15:46:09.440993] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.302 [2024-07-10 15:46:09.441009] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.302 [2024-07-10 15:46:09.441015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441022] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.302 [2024-07-10 15:46:09.441031] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:30.302 [2024-07-10 15:46:09.441046] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:30.302 [2024-07-10 15:46:09.441058] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441071] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.302 [2024-07-10 15:46:09.441082] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.302 [2024-07-10 15:46:09.441103] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.302 [2024-07-10 15:46:09.441225] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.302 
[2024-07-10 15:46:09.441240] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.302 [2024-07-10 15:46:09.441246] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.302 [2024-07-10 15:46:09.441263] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:30.302 [2024-07-10 15:46:09.441280] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441289] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.302 [2024-07-10 15:46:09.441306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.302 [2024-07-10 15:46:09.441327] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.302 [2024-07-10 15:46:09.441451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.302 [2024-07-10 15:46:09.441466] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.302 [2024-07-10 15:46:09.441473] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441479] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.302 [2024-07-10 15:46:09.441488] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:30.302 [2024-07-10 15:46:09.441496] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:30.302 [2024-07-10 15:46:09.441514] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:30.302 [2024-07-10 15:46:09.441624] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:30.302 [2024-07-10 15:46:09.441631] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:30.302 [2024-07-10 15:46:09.441643] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441666] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.302 [2024-07-10 15:46:09.441672] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.302 [2024-07-10 15:46:09.441683] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.302 [2024-07-10 15:46:09.441704] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.302 [2024-07-10 15:46:09.441879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.303 [2024-07-10 15:46:09.441895] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.303 [2024-07-10 15:46:09.441902] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 
15:46:09.441909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.303 [2024-07-10 15:46:09.441918] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:30.303 [2024-07-10 15:46:09.441935] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.441944] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.441950] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.441961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.303 [2024-07-10 15:46:09.441982] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.303 [2024-07-10 15:46:09.442110] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.303 [2024-07-10 15:46:09.442125] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.303 [2024-07-10 15:46:09.442132] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.442139] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.303 [2024-07-10 15:46:09.442148] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:30.303 [2024-07-10 15:46:09.442156] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.442170] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:30.303 [2024-07-10 15:46:09.442184] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.442197] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.442205] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.442211] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.442221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.303 [2024-07-10 15:46:09.442243] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.303 [2024-07-10 15:46:09.442472] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.303 [2024-07-10 15:46:09.442491] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.303 [2024-07-10 15:46:09.442498] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.442505] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=4096, cccid=0 00:22:30.303 [2024-07-10 15:46:09.442512] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17dabf0) on tqpair(0x175ae10): expected_datao=0, payload_size=4096 00:22:30.303 [2024-07-10 15:46:09.442531] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.442540] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488439] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.303 [2024-07-10 15:46:09.488457] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.303 [2024-07-10 15:46:09.488465] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488471] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.303 [2024-07-10 15:46:09.488484] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:30.303 [2024-07-10 15:46:09.488492] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:30.303 [2024-07-10 15:46:09.488500] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:30.303 [2024-07-10 15:46:09.488506] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:30.303 [2024-07-10 15:46:09.488513] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:30.303 [2024-07-10 15:46:09.488521] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.488540] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.488553] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488560] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488567] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.488578] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:30.303 [2024-07-10 15:46:09.488601] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.303 [2024-07-10 15:46:09.488758] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.303 [2024-07-10 15:46:09.488770] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.303 [2024-07-10 15:46:09.488777] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488784] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17dabf0) on tqpair=0x175ae10 00:22:30.303 [2024-07-10 15:46:09.488795] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488803] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488809] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.488819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.303 [2024-07-10 15:46:09.488828] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488835] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488841] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.488850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.303 [2024-07-10 15:46:09.488864] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488871] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488877] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.488886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.303 [2024-07-10 15:46:09.488896] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488902] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.488932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.303 [2024-07-10 15:46:09.488941] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.488959] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.488971] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.488983] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.489008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.303 [2024-07-10 15:46:09.489031] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dabf0, cid 0, qid 0 00:22:30.303 [2024-07-10 15:46:09.489042] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dad50, cid 1, qid 0 00:22:30.303 [2024-07-10 15:46:09.489049] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17daeb0, cid 2, qid 0 00:22:30.303 [2024-07-10 15:46:09.489071] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.303 [2024-07-10 15:46:09.489079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db170, cid 4, qid 0 00:22:30.303 [2024-07-10 15:46:09.489280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.303 [2024-07-10 15:46:09.489294] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.303 [2024-07-10 15:46:09.489300] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.489307] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db170) on tqpair=0x175ae10 00:22:30.303 [2024-07-10 15:46:09.489316] 
nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:30.303 [2024-07-10 15:46:09.489325] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.489338] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.489354] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.489365] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.489388] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.489394] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.489405] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:30.303 [2024-07-10 15:46:09.489446] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db170, cid 4, qid 0 00:22:30.303 [2024-07-10 15:46:09.489608] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.303 [2024-07-10 15:46:09.489628] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.303 [2024-07-10 15:46:09.489635] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.489642] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db170) on tqpair=0x175ae10 00:22:30.303 [2024-07-10 15:46:09.489708] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.489727] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:30.303 [2024-07-10 15:46:09.489741] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.489748] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.489755] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x175ae10) 00:22:30.303 [2024-07-10 15:46:09.489781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.303 [2024-07-10 15:46:09.489803] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db170, cid 4, qid 0 00:22:30.303 [2024-07-10 15:46:09.490006] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.303 [2024-07-10 15:46:09.490022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.303 [2024-07-10 15:46:09.490029] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.303 [2024-07-10 15:46:09.490035] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=4096, cccid=4 00:22:30.303 [2024-07-10 15:46:09.490043] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17db170) on tqpair(0x175ae10): expected_datao=0, payload_size=4096 00:22:30.303 [2024-07-10 
15:46:09.490054] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490062] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490089] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.490099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.490106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490112] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db170) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.490133] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:30.304 [2024-07-10 15:46:09.490153] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490170] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490191] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490197] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.490208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.490229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db170, cid 4, qid 0 00:22:30.304 [2024-07-10 15:46:09.490395] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.304 [2024-07-10 15:46:09.490411] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.304 [2024-07-10 15:46:09.490417] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490430] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=4096, cccid=4 00:22:30.304 [2024-07-10 15:46:09.490438] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17db170) on tqpair(0x175ae10): expected_datao=0, payload_size=4096 00:22:30.304 [2024-07-10 15:46:09.490454] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490462] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490486] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.490497] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.490503] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490510] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db170) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.490533] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490552] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 
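[Editor's note] The debug trace up to this point is the SPDK host driver's admin-queue bring-up against the target named in the log (10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1): Fabrics property reads of CC/CSTS, enabling the controller by writing CC.EN = 1, waiting for CSTS.RDY = 1, IDENTIFY controller, AER configuration, keep-alive setup, queue-count negotiation, and the namespace IDENTIFY passes. As a minimal sketch only, assuming nvme-cli and the kernel nvme-tcp module are available on a host (neither is shown in this log), the same exchange could be reproduced by hand:

  # Load the kernel NVMe/TCP initiator (assumption: not already loaded).
  sudo modprobe nvme-tcp
  # Ask the target from the trace for its discovery log page.
  sudo nvme discover -t tcp -a 10.0.0.2 -s 4420
  # Connecting drives the same Fabrics property get/set, CC.EN = 1 enable and
  # IDENTIFY sequence that the SPDK host driver is logging here.
  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  # Read back the controller and namespace identify data for comparison with
  # the report printed further down (the /dev/nvme0 name is an assumption).
  sudo nvme id-ctrl /dev/nvme0
  sudo nvme list-ns /dev/nvme0

The controller report that appears below looks like the output of SPDK's bundled identify example; a plausible invocation, with the binary path and transport-ID string given as assumptions rather than taken from this log, would be:

  ./build/examples/identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'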
00:22:30.304 [2024-07-10 15:46:09.490566] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490573] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.490590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.490611] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db170, cid 4, qid 0 00:22:30.304 [2024-07-10 15:46:09.490752] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.304 [2024-07-10 15:46:09.490768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.304 [2024-07-10 15:46:09.490774] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490781] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=4096, cccid=4 00:22:30.304 [2024-07-10 15:46:09.490788] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17db170) on tqpair(0x175ae10): expected_datao=0, payload_size=4096 00:22:30.304 [2024-07-10 15:46:09.490799] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490807] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490834] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.490845] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.490851] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490858] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db170) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.490872] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490887] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490902] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490912] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490921] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490929] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:30.304 [2024-07-10 15:46:09.490937] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:30.304 [2024-07-10 15:46:09.490946] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:30.304 [2024-07-10 15:46:09.490968] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 
15:46:09.490977] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.490984] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.490994] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.491005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491012] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491018] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.491041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:30.304 [2024-07-10 15:46:09.491067] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db170, cid 4, qid 0 00:22:30.304 [2024-07-10 15:46:09.491079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db2d0, cid 5, qid 0 00:22:30.304 [2024-07-10 15:46:09.491274] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.491290] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.491297] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491303] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db170) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.491314] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.491323] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.491329] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491336] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db2d0) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.491353] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491362] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.491378] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.491399] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db2d0, cid 5, qid 0 00:22:30.304 [2024-07-10 15:46:09.491630] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.491644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.491651] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491657] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db2d0) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.491674] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491689] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.491699] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.491721] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db2d0, cid 5, qid 0 00:22:30.304 [2024-07-10 15:46:09.491852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.491868] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.491875] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491881] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db2d0) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.491902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491912] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.491918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.491928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.491949] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db2d0, cid 5, qid 0 00:22:30.304 [2024-07-10 15:46:09.492070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.304 [2024-07-10 15:46:09.492082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.304 [2024-07-10 15:46:09.492089] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.492096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db2d0) on tqpair=0x175ae10 00:22:30.304 [2024-07-10 15:46:09.492116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.492125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.492132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.492142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.492154] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.492161] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.492167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.492176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.492188] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.492195] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.304 [2024-07-10 15:46:09.492201] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x175ae10) 00:22:30.304 [2024-07-10 15:46:09.492210] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.304 [2024-07-10 15:46:09.492222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.492245] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.492251] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x175ae10) 00:22:30.305 [2024-07-10 15:46:09.492260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.305 [2024-07-10 15:46:09.492282] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db2d0, cid 5, qid 0 00:22:30.305 [2024-07-10 15:46:09.492293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db170, cid 4, qid 0 00:22:30.305 [2024-07-10 15:46:09.492318] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db430, cid 6, qid 0 00:22:30.305 [2024-07-10 15:46:09.492326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db590, cid 7, qid 0 00:22:30.305 [2024-07-10 15:46:09.496443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.305 [2024-07-10 15:46:09.496459] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.305 [2024-07-10 15:46:09.496465] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496472] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=8192, cccid=5 00:22:30.305 [2024-07-10 15:46:09.496479] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17db2d0) on tqpair(0x175ae10): expected_datao=0, payload_size=8192 00:22:30.305 [2024-07-10 15:46:09.496494] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496502] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496510] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.305 [2024-07-10 15:46:09.496518] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.305 [2024-07-10 15:46:09.496525] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496530] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=512, cccid=4 00:22:30.305 [2024-07-10 15:46:09.496538] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17db170) on tqpair(0x175ae10): expected_datao=0, payload_size=512 00:22:30.305 [2024-07-10 15:46:09.496547] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496554] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496562] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.305 [2024-07-10 15:46:09.496570] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.305 [2024-07-10 15:46:09.496577] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496583] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=512, cccid=6 00:22:30.305 [2024-07-10 15:46:09.496590] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x17db430) on tqpair(0x175ae10): expected_datao=0, payload_size=512 00:22:30.305 [2024-07-10 15:46:09.496600] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496607] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496615] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:30.305 [2024-07-10 15:46:09.496623] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:30.305 [2024-07-10 15:46:09.496629] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496635] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x175ae10): datao=0, datal=4096, cccid=7 00:22:30.305 [2024-07-10 15:46:09.496642] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17db590) on tqpair(0x175ae10): expected_datao=0, payload_size=4096 00:22:30.305 [2024-07-10 15:46:09.496652] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496659] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496667] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.305 [2024-07-10 15:46:09.496675] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.305 [2024-07-10 15:46:09.496681] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496688] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db2d0) on tqpair=0x175ae10 00:22:30.305 [2024-07-10 15:46:09.496722] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.305 [2024-07-10 15:46:09.496733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.305 [2024-07-10 15:46:09.496739] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496745] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db170) on tqpair=0x175ae10 00:22:30.305 [2024-07-10 15:46:09.496759] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.305 [2024-07-10 15:46:09.496768] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.305 [2024-07-10 15:46:09.496774] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496780] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db430) on tqpair=0x175ae10 00:22:30.305 [2024-07-10 15:46:09.496791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.305 [2024-07-10 15:46:09.496800] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.305 [2024-07-10 15:46:09.496806] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.305 [2024-07-10 15:46:09.496814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db590) on tqpair=0x175ae10 00:22:30.305 ===================================================== 00:22:30.305 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:30.305 ===================================================== 00:22:30.305 Controller Capabilities/Features 00:22:30.305 ================================ 00:22:30.305 Vendor ID: 8086 00:22:30.305 Subsystem Vendor ID: 8086 00:22:30.305 Serial Number: SPDK00000000000001 00:22:30.305 Model Number: SPDK bdev Controller 00:22:30.305 Firmware Version: 24.01.1 00:22:30.305 Recommended Arb 
Burst: 6 00:22:30.305 IEEE OUI Identifier: e4 d2 5c 00:22:30.305 Multi-path I/O 00:22:30.305 May have multiple subsystem ports: Yes 00:22:30.305 May have multiple controllers: Yes 00:22:30.305 Associated with SR-IOV VF: No 00:22:30.305 Max Data Transfer Size: 131072 00:22:30.305 Max Number of Namespaces: 32 00:22:30.305 Max Number of I/O Queues: 127 00:22:30.305 NVMe Specification Version (VS): 1.3 00:22:30.305 NVMe Specification Version (Identify): 1.3 00:22:30.305 Maximum Queue Entries: 128 00:22:30.305 Contiguous Queues Required: Yes 00:22:30.305 Arbitration Mechanisms Supported 00:22:30.305 Weighted Round Robin: Not Supported 00:22:30.305 Vendor Specific: Not Supported 00:22:30.305 Reset Timeout: 15000 ms 00:22:30.305 Doorbell Stride: 4 bytes 00:22:30.305 NVM Subsystem Reset: Not Supported 00:22:30.305 Command Sets Supported 00:22:30.305 NVM Command Set: Supported 00:22:30.305 Boot Partition: Not Supported 00:22:30.305 Memory Page Size Minimum: 4096 bytes 00:22:30.305 Memory Page Size Maximum: 4096 bytes 00:22:30.305 Persistent Memory Region: Not Supported 00:22:30.305 Optional Asynchronous Events Supported 00:22:30.305 Namespace Attribute Notices: Supported 00:22:30.305 Firmware Activation Notices: Not Supported 00:22:30.305 ANA Change Notices: Not Supported 00:22:30.305 PLE Aggregate Log Change Notices: Not Supported 00:22:30.305 LBA Status Info Alert Notices: Not Supported 00:22:30.305 EGE Aggregate Log Change Notices: Not Supported 00:22:30.305 Normal NVM Subsystem Shutdown event: Not Supported 00:22:30.305 Zone Descriptor Change Notices: Not Supported 00:22:30.305 Discovery Log Change Notices: Not Supported 00:22:30.305 Controller Attributes 00:22:30.305 128-bit Host Identifier: Supported 00:22:30.305 Non-Operational Permissive Mode: Not Supported 00:22:30.305 NVM Sets: Not Supported 00:22:30.305 Read Recovery Levels: Not Supported 00:22:30.305 Endurance Groups: Not Supported 00:22:30.305 Predictable Latency Mode: Not Supported 00:22:30.305 Traffic Based Keep ALive: Not Supported 00:22:30.305 Namespace Granularity: Not Supported 00:22:30.305 SQ Associations: Not Supported 00:22:30.305 UUID List: Not Supported 00:22:30.305 Multi-Domain Subsystem: Not Supported 00:22:30.305 Fixed Capacity Management: Not Supported 00:22:30.305 Variable Capacity Management: Not Supported 00:22:30.305 Delete Endurance Group: Not Supported 00:22:30.305 Delete NVM Set: Not Supported 00:22:30.305 Extended LBA Formats Supported: Not Supported 00:22:30.305 Flexible Data Placement Supported: Not Supported 00:22:30.305 00:22:30.305 Controller Memory Buffer Support 00:22:30.305 ================================ 00:22:30.305 Supported: No 00:22:30.305 00:22:30.305 Persistent Memory Region Support 00:22:30.305 ================================ 00:22:30.305 Supported: No 00:22:30.305 00:22:30.305 Admin Command Set Attributes 00:22:30.305 ============================ 00:22:30.305 Security Send/Receive: Not Supported 00:22:30.305 Format NVM: Not Supported 00:22:30.305 Firmware Activate/Download: Not Supported 00:22:30.305 Namespace Management: Not Supported 00:22:30.305 Device Self-Test: Not Supported 00:22:30.305 Directives: Not Supported 00:22:30.305 NVMe-MI: Not Supported 00:22:30.305 Virtualization Management: Not Supported 00:22:30.305 Doorbell Buffer Config: Not Supported 00:22:30.305 Get LBA Status Capability: Not Supported 00:22:30.305 Command & Feature Lockdown Capability: Not Supported 00:22:30.305 Abort Command Limit: 4 00:22:30.305 Async Event Request Limit: 4 00:22:30.305 Number of Firmware Slots: N/A 
00:22:30.305 Firmware Slot 1 Read-Only: N/A 00:22:30.305 Firmware Activation Without Reset: N/A 00:22:30.305 Multiple Update Detection Support: N/A 00:22:30.305 Firmware Update Granularity: No Information Provided 00:22:30.305 Per-Namespace SMART Log: No 00:22:30.305 Asymmetric Namespace Access Log Page: Not Supported 00:22:30.305 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:30.305 Command Effects Log Page: Supported 00:22:30.305 Get Log Page Extended Data: Supported 00:22:30.305 Telemetry Log Pages: Not Supported 00:22:30.305 Persistent Event Log Pages: Not Supported 00:22:30.305 Supported Log Pages Log Page: May Support 00:22:30.305 Commands Supported & Effects Log Page: Not Supported 00:22:30.305 Feature Identifiers & Effects Log Page:May Support 00:22:30.305 NVMe-MI Commands & Effects Log Page: May Support 00:22:30.305 Data Area 4 for Telemetry Log: Not Supported 00:22:30.305 Error Log Page Entries Supported: 128 00:22:30.305 Keep Alive: Supported 00:22:30.305 Keep Alive Granularity: 10000 ms 00:22:30.305 00:22:30.305 NVM Command Set Attributes 00:22:30.305 ========================== 00:22:30.305 Submission Queue Entry Size 00:22:30.305 Max: 64 00:22:30.306 Min: 64 00:22:30.306 Completion Queue Entry Size 00:22:30.306 Max: 16 00:22:30.306 Min: 16 00:22:30.306 Number of Namespaces: 32 00:22:30.306 Compare Command: Supported 00:22:30.306 Write Uncorrectable Command: Not Supported 00:22:30.306 Dataset Management Command: Supported 00:22:30.306 Write Zeroes Command: Supported 00:22:30.306 Set Features Save Field: Not Supported 00:22:30.306 Reservations: Supported 00:22:30.306 Timestamp: Not Supported 00:22:30.306 Copy: Supported 00:22:30.306 Volatile Write Cache: Present 00:22:30.306 Atomic Write Unit (Normal): 1 00:22:30.306 Atomic Write Unit (PFail): 1 00:22:30.306 Atomic Compare & Write Unit: 1 00:22:30.306 Fused Compare & Write: Supported 00:22:30.306 Scatter-Gather List 00:22:30.306 SGL Command Set: Supported 00:22:30.306 SGL Keyed: Supported 00:22:30.306 SGL Bit Bucket Descriptor: Not Supported 00:22:30.306 SGL Metadata Pointer: Not Supported 00:22:30.306 Oversized SGL: Not Supported 00:22:30.306 SGL Metadata Address: Not Supported 00:22:30.306 SGL Offset: Supported 00:22:30.306 Transport SGL Data Block: Not Supported 00:22:30.306 Replay Protected Memory Block: Not Supported 00:22:30.306 00:22:30.306 Firmware Slot Information 00:22:30.306 ========================= 00:22:30.306 Active slot: 1 00:22:30.306 Slot 1 Firmware Revision: 24.01.1 00:22:30.306 00:22:30.306 00:22:30.306 Commands Supported and Effects 00:22:30.306 ============================== 00:22:30.306 Admin Commands 00:22:30.306 -------------- 00:22:30.306 Get Log Page (02h): Supported 00:22:30.306 Identify (06h): Supported 00:22:30.306 Abort (08h): Supported 00:22:30.306 Set Features (09h): Supported 00:22:30.306 Get Features (0Ah): Supported 00:22:30.306 Asynchronous Event Request (0Ch): Supported 00:22:30.306 Keep Alive (18h): Supported 00:22:30.306 I/O Commands 00:22:30.306 ------------ 00:22:30.306 Flush (00h): Supported LBA-Change 00:22:30.306 Write (01h): Supported LBA-Change 00:22:30.306 Read (02h): Supported 00:22:30.306 Compare (05h): Supported 00:22:30.306 Write Zeroes (08h): Supported LBA-Change 00:22:30.306 Dataset Management (09h): Supported LBA-Change 00:22:30.306 Copy (19h): Supported LBA-Change 00:22:30.306 Unknown (79h): Supported LBA-Change 00:22:30.306 Unknown (7Ah): Supported 00:22:30.306 00:22:30.306 Error Log 00:22:30.306 ========= 00:22:30.306 00:22:30.306 Arbitration 00:22:30.306 =========== 
00:22:30.306 Arbitration Burst: 1 00:22:30.306 00:22:30.306 Power Management 00:22:30.306 ================ 00:22:30.306 Number of Power States: 1 00:22:30.306 Current Power State: Power State #0 00:22:30.306 Power State #0: 00:22:30.306 Max Power: 0.00 W 00:22:30.306 Non-Operational State: Operational 00:22:30.306 Entry Latency: Not Reported 00:22:30.306 Exit Latency: Not Reported 00:22:30.306 Relative Read Throughput: 0 00:22:30.306 Relative Read Latency: 0 00:22:30.306 Relative Write Throughput: 0 00:22:30.306 Relative Write Latency: 0 00:22:30.306 Idle Power: Not Reported 00:22:30.306 Active Power: Not Reported 00:22:30.306 Non-Operational Permissive Mode: Not Supported 00:22:30.306 00:22:30.306 Health Information 00:22:30.306 ================== 00:22:30.306 Critical Warnings: 00:22:30.306 Available Spare Space: OK 00:22:30.306 Temperature: OK 00:22:30.306 Device Reliability: OK 00:22:30.306 Read Only: No 00:22:30.306 Volatile Memory Backup: OK 00:22:30.306 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:30.306 Temperature Threshold: [2024-07-10 15:46:09.496928] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.496939] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.496945] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x175ae10) 00:22:30.306 [2024-07-10 15:46:09.496955] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.306 [2024-07-10 15:46:09.496977] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db590, cid 7, qid 0 00:22:30.306 [2024-07-10 15:46:09.497215] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.306 [2024-07-10 15:46:09.497228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.306 [2024-07-10 15:46:09.497235] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497241] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db590) on tqpair=0x175ae10 00:22:30.306 [2024-07-10 15:46:09.497281] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:30.306 [2024-07-10 15:46:09.497302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.306 [2024-07-10 15:46:09.497313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.306 [2024-07-10 15:46:09.497323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.306 [2024-07-10 15:46:09.497332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:30.306 [2024-07-10 15:46:09.497345] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497353] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497359] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.306 [2024-07-10 15:46:09.497384] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.306 
[2024-07-10 15:46:09.497407] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.306 [2024-07-10 15:46:09.497591] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.306 [2024-07-10 15:46:09.497606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.306 [2024-07-10 15:46:09.497613] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497619] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.306 [2024-07-10 15:46:09.497631] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497639] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497646] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.306 [2024-07-10 15:46:09.497656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.306 [2024-07-10 15:46:09.497682] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.306 [2024-07-10 15:46:09.497819] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.306 [2024-07-10 15:46:09.497831] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.306 [2024-07-10 15:46:09.497838] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497844] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.306 [2024-07-10 15:46:09.497853] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:30.306 [2024-07-10 15:46:09.497861] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:30.306 [2024-07-10 15:46:09.497881] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497891] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.497897] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.306 [2024-07-10 15:46:09.497908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.306 [2024-07-10 15:46:09.497928] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.306 [2024-07-10 15:46:09.498058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.306 [2024-07-10 15:46:09.498074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.306 [2024-07-10 15:46:09.498080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.306 [2024-07-10 15:46:09.498087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.498104] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498120] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.498130] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.498151] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.498272] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.498283] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.498290] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498296] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.498313] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498322] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498329] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.498339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.498359] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.498492] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.498507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.498514] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498520] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.498538] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498547] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498553] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.498564] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.498584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.498711] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.498723] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.498729] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498736] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.498757] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498767] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498773] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.498783] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.498804] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.498927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.498943] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.498949] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498956] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.498973] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498982] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.498989] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.498999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.499019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.499141] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.499153] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.499160] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499166] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.499183] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499192] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499198] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.499208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.499229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.499351] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.499363] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.499369] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499376] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.499393] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499402] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499408] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.499419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.499446] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.499571] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
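[Editor's note] From the "Prepare to destruct SSD" entry above, the trace switches to teardown: the four outstanding ASYNC EVENT REQUEST commands complete with ABORTED - SQ DELETION, the driver notes RTD3E = 0 us and a 10000 ms shutdown timeout, and the repeated FABRIC PROPERTY GET qid:0 cid:3 entries here are the host polling CSTS until the controller reports that shutdown has finished (recorded a little further down as completing in 6 milliseconds). As a hedged sketch, assuming the manual nvme-cli connection from the earlier example rather than this test's own harness, the equivalent initiator-side teardown would be a disconnect, which likewise issues the CC shutdown notification and polls CSTS before deleting the queues:

  # Detach from the subsystem traced above; the kernel host performs the same
  # CC.SHN write and CSTS poll loop before tearing down the admin/IO queues.
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1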
00:22:30.307 [2024-07-10 15:46:09.499586] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.499593] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499600] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.499617] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499630] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499637] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.499647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.499668] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.499787] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.499799] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.499805] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499812] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.499829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499838] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.499844] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.499854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.499875] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.500000] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.500015] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.500022] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.500029] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.500046] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.500055] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.500062] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.500072] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.500093] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.500217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.500229] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.500236] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.500242] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.500259] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.500268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.500274] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.500285] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.500305] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.504437] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.504456] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.504463] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.504470] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.504489] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.504498] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.504508] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x175ae10) 00:22:30.307 [2024-07-10 15:46:09.504520] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:30.307 [2024-07-10 15:46:09.504542] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17db010, cid 3, qid 0 00:22:30.307 [2024-07-10 15:46:09.504698] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:30.307 [2024-07-10 15:46:09.504710] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:30.307 [2024-07-10 15:46:09.504716] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:30.307 [2024-07-10 15:46:09.504723] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x17db010) on tqpair=0x175ae10 00:22:30.307 [2024-07-10 15:46:09.504737] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:22:30.307 0 Kelvin (-273 Celsius) 00:22:30.307 Available Spare: 0% 00:22:30.307 Available Spare Threshold: 0% 00:22:30.307 Life Percentage Used: 0% 00:22:30.307 Data Units Read: 0 00:22:30.307 Data Units Written: 0 00:22:30.307 Host Read Commands: 0 00:22:30.307 Host Write Commands: 0 00:22:30.307 Controller Busy Time: 0 minutes 00:22:30.307 Power Cycles: 0 00:22:30.307 Power On Hours: 0 hours 00:22:30.308 Unsafe Shutdowns: 0 00:22:30.308 Unrecoverable Media Errors: 0 00:22:30.308 Lifetime Error Log Entries: 0 00:22:30.308 Warning Temperature Time: 0 minutes 00:22:30.308 Critical Temperature Time: 0 minutes 00:22:30.308 00:22:30.308 Number of Queues 00:22:30.308 ================ 00:22:30.308 Number of I/O Submission Queues: 127 00:22:30.308 Number of I/O Completion Queues: 127 00:22:30.308 00:22:30.308 Active Namespaces 00:22:30.308 ================= 00:22:30.308 Namespace ID:1 00:22:30.308 Error Recovery Timeout: Unlimited 
00:22:30.308 Command Set Identifier: NVM (00h) 00:22:30.308 Deallocate: Supported 00:22:30.308 Deallocated/Unwritten Error: Not Supported 00:22:30.308 Deallocated Read Value: Unknown 00:22:30.308 Deallocate in Write Zeroes: Not Supported 00:22:30.308 Deallocated Guard Field: 0xFFFF 00:22:30.308 Flush: Supported 00:22:30.308 Reservation: Supported 00:22:30.308 Namespace Sharing Capabilities: Multiple Controllers 00:22:30.308 Size (in LBAs): 131072 (0GiB) 00:22:30.308 Capacity (in LBAs): 131072 (0GiB) 00:22:30.308 Utilization (in LBAs): 131072 (0GiB) 00:22:30.308 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:30.308 EUI64: ABCDEF0123456789 00:22:30.308 UUID: 41793530-bd02-4c7a-9dbb-fecbe0d418ed 00:22:30.308 Thin Provisioning: Not Supported 00:22:30.308 Per-NS Atomic Units: Yes 00:22:30.308 Atomic Boundary Size (Normal): 0 00:22:30.308 Atomic Boundary Size (PFail): 0 00:22:30.308 Atomic Boundary Offset: 0 00:22:30.308 Maximum Single Source Range Length: 65535 00:22:30.308 Maximum Copy Length: 65535 00:22:30.308 Maximum Source Range Count: 1 00:22:30.308 NGUID/EUI64 Never Reused: No 00:22:30.308 Namespace Write Protected: No 00:22:30.308 Number of LBA Formats: 1 00:22:30.308 Current LBA Format: LBA Format #00 00:22:30.308 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:30.308 00:22:30.308 15:46:09 -- host/identify.sh@51 -- # sync 00:22:30.308 15:46:09 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:30.308 15:46:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:30.308 15:46:09 -- common/autotest_common.sh@10 -- # set +x 00:22:30.308 15:46:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:30.308 15:46:09 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:30.308 15:46:09 -- host/identify.sh@56 -- # nvmftestfini 00:22:30.308 15:46:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:30.308 15:46:09 -- nvmf/common.sh@116 -- # sync 00:22:30.308 15:46:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:30.308 15:46:09 -- nvmf/common.sh@119 -- # set +e 00:22:30.308 15:46:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:30.308 15:46:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:30.308 rmmod nvme_tcp 00:22:30.308 rmmod nvme_fabrics 00:22:30.308 rmmod nvme_keyring 00:22:30.308 15:46:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:30.308 15:46:09 -- nvmf/common.sh@123 -- # set -e 00:22:30.308 15:46:09 -- nvmf/common.sh@124 -- # return 0 00:22:30.308 15:46:09 -- nvmf/common.sh@477 -- # '[' -n 2188959 ']' 00:22:30.308 15:46:09 -- nvmf/common.sh@478 -- # killprocess 2188959 00:22:30.308 15:46:09 -- common/autotest_common.sh@926 -- # '[' -z 2188959 ']' 00:22:30.308 15:46:09 -- common/autotest_common.sh@930 -- # kill -0 2188959 00:22:30.308 15:46:09 -- common/autotest_common.sh@931 -- # uname 00:22:30.308 15:46:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:30.308 15:46:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2188959 00:22:30.308 15:46:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:30.308 15:46:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:30.308 15:46:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2188959' 00:22:30.308 killing process with pid 2188959 00:22:30.308 15:46:09 -- common/autotest_common.sh@945 -- # kill 2188959 00:22:30.308 [2024-07-10 15:46:09.603255] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in 
favor of trtype' scheduled for removal in v24.05 hit 1 times 00:22:30.308 15:46:09 -- common/autotest_common.sh@950 -- # wait 2188959 00:22:30.567 15:46:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:30.567 15:46:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:30.567 15:46:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:30.567 15:46:09 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.567 15:46:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:30.567 15:46:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.567 15:46:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.567 15:46:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.100 15:46:11 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:33.100 00:22:33.100 real 0m6.067s 00:22:33.100 user 0m7.136s 00:22:33.100 sys 0m1.872s 00:22:33.100 15:46:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:33.100 15:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:33.100 ************************************ 00:22:33.100 END TEST nvmf_identify 00:22:33.100 ************************************ 00:22:33.100 15:46:11 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:33.100 15:46:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:33.100 15:46:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:33.100 15:46:11 -- common/autotest_common.sh@10 -- # set +x 00:22:33.100 ************************************ 00:22:33.100 START TEST nvmf_perf 00:22:33.100 ************************************ 00:22:33.100 15:46:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:33.100 * Looking for test storage... 
00:22:33.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:33.100 15:46:12 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:33.100 15:46:12 -- nvmf/common.sh@7 -- # uname -s 00:22:33.100 15:46:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:33.100 15:46:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:33.100 15:46:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:33.100 15:46:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:33.100 15:46:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:33.100 15:46:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:33.100 15:46:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:33.100 15:46:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:33.100 15:46:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:33.100 15:46:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:33.100 15:46:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.100 15:46:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:33.100 15:46:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:33.100 15:46:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:33.100 15:46:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:33.100 15:46:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:33.100 15:46:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:33.100 15:46:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:33.100 15:46:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:33.100 15:46:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.100 15:46:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.100 15:46:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.100 15:46:12 -- paths/export.sh@5 -- # export PATH 00:22:33.100 15:46:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:33.100 15:46:12 -- nvmf/common.sh@46 -- # : 0 00:22:33.100 15:46:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:33.100 15:46:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:33.100 15:46:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:33.100 15:46:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:33.100 15:46:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:33.100 15:46:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:33.100 15:46:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:33.100 15:46:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:33.100 15:46:12 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:33.100 15:46:12 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:33.100 15:46:12 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:33.100 15:46:12 -- host/perf.sh@17 -- # nvmftestinit 00:22:33.100 15:46:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:33.100 15:46:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:33.100 15:46:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:33.100 15:46:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:33.100 15:46:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:33.100 15:46:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:33.100 15:46:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:33.100 15:46:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:33.101 15:46:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:33.101 15:46:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:33.101 15:46:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:33.101 15:46:12 -- common/autotest_common.sh@10 -- # set +x 00:22:35.000 15:46:13 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:35.000 15:46:13 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:35.000 15:46:13 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:35.000 15:46:13 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:35.000 15:46:13 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:35.000 15:46:13 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:35.000 15:46:13 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:35.000 15:46:13 -- nvmf/common.sh@294 -- # net_devs=() 
00:22:35.000 15:46:13 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:35.000 15:46:13 -- nvmf/common.sh@295 -- # e810=() 00:22:35.000 15:46:13 -- nvmf/common.sh@295 -- # local -ga e810 00:22:35.000 15:46:13 -- nvmf/common.sh@296 -- # x722=() 00:22:35.000 15:46:13 -- nvmf/common.sh@296 -- # local -ga x722 00:22:35.000 15:46:13 -- nvmf/common.sh@297 -- # mlx=() 00:22:35.000 15:46:13 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:35.000 15:46:13 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.000 15:46:13 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:35.000 15:46:13 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:22:35.000 15:46:13 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:22:35.000 15:46:13 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:22:35.000 15:46:13 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:22:35.000 15:46:13 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:35.000 15:46:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:35.000 15:46:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:35.000 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:35.001 15:46:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:35.001 15:46:13 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:35.001 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:35.001 15:46:13 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:35.001 15:46:13 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:35.001 15:46:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.001 15:46:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:35.001 15:46:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
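The E810 port discovery traced here reduces to a sysfs glob per detected PCI function; a minimal standalone sketch of that lookup, assuming only the /sys layout and the 0000:0a:00.x addresses that appear in the trace (the helper name is illustrative, not part of nvmf/common.sh):

# Map a PCI function to its kernel net device name(s), mirroring
# pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) from the trace.
pci_to_netdevs() {
    local pci=$1
    local devs=("/sys/bus/pci/devices/$pci/net/"*)
    echo "${devs[@]##*/}"    # keep only the interface names, drop the sysfs prefix
}

for pci in 0000:0a:00.0 0000:0a:00.1; do
    echo "Found net devices under $pci: $(pci_to_netdevs "$pci")"
done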
00:22:35.001 15:46:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:35.001 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:35.001 15:46:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.001 15:46:13 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:35.001 15:46:13 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.001 15:46:13 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:35.001 15:46:13 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.001 15:46:13 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:35.001 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:35.001 15:46:13 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.001 15:46:13 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:35.001 15:46:13 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:35.001 15:46:13 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:22:35.001 15:46:13 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:22:35.001 15:46:13 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.001 15:46:13 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.001 15:46:13 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.001 15:46:13 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:22:35.001 15:46:13 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.001 15:46:13 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.001 15:46:13 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:22:35.001 15:46:13 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.001 15:46:13 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.001 15:46:13 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:22:35.001 15:46:13 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:22:35.001 15:46:13 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.001 15:46:13 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.001 15:46:13 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.001 15:46:13 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.001 15:46:13 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:22:35.001 15:46:13 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.001 15:46:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.001 15:46:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.001 15:46:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:22:35.001 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.001 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:22:35.001 00:22:35.001 --- 10.0.0.2 ping statistics --- 00:22:35.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.001 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:22:35.001 15:46:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.001 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:35.001 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:22:35.001 00:22:35.001 --- 10.0.0.1 ping statistics --- 00:22:35.001 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.001 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:35.001 15:46:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.001 15:46:14 -- nvmf/common.sh@410 -- # return 0 00:22:35.001 15:46:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:35.001 15:46:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.001 15:46:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:35.001 15:46:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:35.001 15:46:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.001 15:46:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:35.001 15:46:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:35.001 15:46:14 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:35.001 15:46:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:35.001 15:46:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:35.001 15:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:35.001 15:46:14 -- nvmf/common.sh@469 -- # nvmfpid=2191067 00:22:35.001 15:46:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:35.001 15:46:14 -- nvmf/common.sh@470 -- # waitforlisten 2191067 00:22:35.001 15:46:14 -- common/autotest_common.sh@819 -- # '[' -z 2191067 ']' 00:22:35.001 15:46:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.001 15:46:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:35.001 15:46:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:35.001 15:46:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:35.001 15:46:14 -- common/autotest_common.sh@10 -- # set +x 00:22:35.001 [2024-07-10 15:46:14.106971] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:35.001 [2024-07-10 15:46:14.107054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.001 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.001 [2024-07-10 15:46:14.175301] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.001 [2024-07-10 15:46:14.291881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:35.001 [2024-07-10 15:46:14.292036] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.001 [2024-07-10 15:46:14.292056] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.001 [2024-07-10 15:46:14.292071] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
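Condensing the nvmf_tcp_init and nvmfappstart steps traced above: the target port cvl_0_0 is moved into a private network namespace and given the target address, its peer cvl_0_1 stays in the root namespace as the initiator side, reachability is checked in both directions, and the NVMe-oF target is then launched inside that namespace. A rough recap (run as root) using the names, addresses and flags from the trace; the backgrounding and the sleep are simplifications of what waitforlisten actually does:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Two-sided NVMe/TCP test topology on one E810 adapter
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Allow NVMe/TCP (port 4420) in on the initiator interface and verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# Start the target inside the namespace: -m 0xF = 4 cores, -e 0xFFFF = tracepoint group mask
ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
sleep 2    # crude stand-in for waitforlisten polling /var/tmp/spdk.sock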
00:22:35.001 [2024-07-10 15:46:14.292130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.001 [2024-07-10 15:46:14.292206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.001 [2024-07-10 15:46:14.292228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.001 [2024-07-10 15:46:14.292234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:35.932 15:46:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:35.932 15:46:15 -- common/autotest_common.sh@852 -- # return 0 00:22:35.932 15:46:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:35.932 15:46:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:35.932 15:46:15 -- common/autotest_common.sh@10 -- # set +x 00:22:35.932 15:46:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:35.932 15:46:15 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:35.932 15:46:15 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:39.199 15:46:18 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:39.199 15:46:18 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:39.199 15:46:18 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:22:39.199 15:46:18 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:39.457 15:46:18 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:39.457 15:46:18 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:22:39.457 15:46:18 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:39.457 15:46:18 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:39.457 15:46:18 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:39.714 [2024-07-10 15:46:19.011219] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.714 15:46:19 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.972 15:46:19 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:39.972 15:46:19 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.229 15:46:19 -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:40.229 15:46:19 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:40.487 15:46:19 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.744 [2024-07-10 15:46:19.962795] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.744 15:46:19 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:41.037 15:46:20 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:22:41.037 15:46:20 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:41.037 15:46:20 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
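The perf host configuration in the trace above amounts to one malloc bdev, one TCP transport, one subsystem carrying two namespaces, and data plus discovery listeners; gathered into a single RPC sequence with the same arguments (the $rpc shorthand is added here for readability, and Nvme0n1 is assumed to have been attached already by gen_nvme.sh / load_subsystem_config as the trace shows):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc bdev_malloc_create 64 512                     # 64 MB malloc bdev, 512-byte blocks -> Malloc0
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow reach this subsystem from the initiator side with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'.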
00:22:41.037 15:46:20 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:22:42.433 Initializing NVMe Controllers 00:22:42.433 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:22:42.433 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:22:42.433 Initialization complete. Launching workers. 00:22:42.433 ======================================================== 00:22:42.433 Latency(us) 00:22:42.433 Device Information : IOPS MiB/s Average min max 00:22:42.433 PCIE (0000:88:00.0) NSID 1 from core 0: 86469.57 337.77 369.51 15.87 4379.43 00:22:42.433 ======================================================== 00:22:42.433 Total : 86469.57 337.77 369.51 15.87 4379.43 00:22:42.433 00:22:42.433 15:46:21 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:42.433 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.365 Initializing NVMe Controllers 00:22:43.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:43.365 Initialization complete. Launching workers. 00:22:43.365 ======================================================== 00:22:43.365 Latency(us) 00:22:43.365 Device Information : IOPS MiB/s Average min max 00:22:43.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 43.00 0.17 23944.72 190.74 45720.46 00:22:43.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 41.00 0.16 24499.81 7961.59 47926.38 00:22:43.365 ======================================================== 00:22:43.365 Total : 84.00 0.33 24215.65 190.74 47926.38 00:22:43.365 00:22:43.365 15:46:22 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:43.365 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.259 Initializing NVMe Controllers 00:22:45.259 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:45.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:45.259 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:45.259 Initialization complete. Launching workers. 
00:22:45.259 ======================================================== 00:22:45.259 Latency(us) 00:22:45.259 Device Information : IOPS MiB/s Average min max 00:22:45.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8005.10 31.27 4004.68 612.56 11544.08 00:22:45.259 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3594.59 14.04 8915.29 4597.15 31438.26 00:22:45.259 ======================================================== 00:22:45.259 Total : 11599.69 45.31 5526.41 612.56 31438.26 00:22:45.259 00:22:45.259 15:46:24 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:45.259 15:46:24 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:45.259 15:46:24 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:45.259 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.786 Initializing NVMe Controllers 00:22:47.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.786 Controller IO queue size 128, less than required. 00:22:47.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.786 Controller IO queue size 128, less than required. 00:22:47.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:47.786 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:47.786 Initialization complete. Launching workers. 00:22:47.786 ======================================================== 00:22:47.786 Latency(us) 00:22:47.786 Device Information : IOPS MiB/s Average min max 00:22:47.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1063.50 265.87 123715.90 71985.90 207732.55 00:22:47.786 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 579.50 144.87 232800.25 66965.88 360104.90 00:22:47.786 ======================================================== 00:22:47.786 Total : 1643.00 410.75 162190.87 66965.88 360104.90 00:22:47.786 00:22:47.786 15:46:26 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:47.786 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.786 No valid NVMe controllers or AIO or URING devices found 00:22:47.786 Initializing NVMe Controllers 00:22:47.786 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.786 Controller IO queue size 128, less than required. 00:22:47.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.786 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:47.786 Controller IO queue size 128, less than required. 00:22:47.786 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.786 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:47.786 WARNING: Some requested NVMe devices were skipped 00:22:47.786 15:46:26 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:47.786 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.312 Initializing NVMe Controllers 00:22:50.312 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.312 Controller IO queue size 128, less than required. 00:22:50.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.312 Controller IO queue size 128, less than required. 00:22:50.312 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:50.312 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:50.312 Initialization complete. Launching workers. 00:22:50.312 00:22:50.312 ==================== 00:22:50.312 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:50.312 TCP transport: 00:22:50.312 polls: 26599 00:22:50.312 idle_polls: 8133 00:22:50.312 sock_completions: 18466 00:22:50.312 nvme_completions: 3791 00:22:50.312 submitted_requests: 5777 00:22:50.312 queued_requests: 1 00:22:50.312 00:22:50.312 ==================== 00:22:50.312 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:50.312 TCP transport: 00:22:50.312 polls: 26895 00:22:50.312 idle_polls: 8461 00:22:50.312 sock_completions: 18434 00:22:50.312 nvme_completions: 4099 00:22:50.312 submitted_requests: 6291 00:22:50.312 queued_requests: 1 00:22:50.312 ======================================================== 00:22:50.312 Latency(us) 00:22:50.312 Device Information : IOPS MiB/s Average min max 00:22:50.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1010.94 252.74 130283.57 60020.52 199767.91 00:22:50.312 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1088.44 272.11 119496.02 55192.70 157962.83 00:22:50.312 ======================================================== 00:22:50.312 Total : 2099.38 524.85 124690.69 55192.70 199767.91 00:22:50.312 00:22:50.312 15:46:29 -- host/perf.sh@66 -- # sync 00:22:50.312 15:46:29 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.312 15:46:29 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:22:50.312 15:46:29 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:22:50.312 15:46:29 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:22:53.590 15:46:32 -- host/perf.sh@72 -- # ls_guid=c93c606c-1354-4188-92ba-961f345229f4 00:22:53.590 15:46:32 -- host/perf.sh@73 -- # get_lvs_free_mb c93c606c-1354-4188-92ba-961f345229f4 00:22:53.590 15:46:32 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c93c606c-1354-4188-92ba-961f345229f4 00:22:53.590 15:46:32 -- common/autotest_common.sh@1344 -- # local lvs_info 00:22:53.590 15:46:32 -- common/autotest_common.sh@1345 -- # local fc 00:22:53.590 15:46:32 -- common/autotest_common.sh@1346 -- # local cs 00:22:53.590 15:46:32 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:53.846 15:46:33 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:22:53.846 { 00:22:53.846 "uuid": "c93c606c-1354-4188-92ba-961f345229f4", 00:22:53.846 "name": "lvs_0", 00:22:53.846 "base_bdev": "Nvme0n1", 00:22:53.846 "total_data_clusters": 238234, 00:22:53.846 "free_clusters": 238234, 00:22:53.846 "block_size": 512, 00:22:53.846 "cluster_size": 4194304 00:22:53.846 } 00:22:53.846 ]' 00:22:53.846 15:46:33 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c93c606c-1354-4188-92ba-961f345229f4") .free_clusters' 00:22:53.846 15:46:33 -- common/autotest_common.sh@1348 -- # fc=238234 00:22:53.846 15:46:33 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c93c606c-1354-4188-92ba-961f345229f4") .cluster_size' 00:22:53.846 15:46:33 -- common/autotest_common.sh@1349 -- # cs=4194304 00:22:53.846 15:46:33 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:22:53.846 15:46:33 -- common/autotest_common.sh@1353 -- # echo 952936 00:22:53.846 952936 00:22:53.846 15:46:33 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:22:53.846 15:46:33 -- host/perf.sh@78 -- # free_mb=20480 00:22:53.846 15:46:33 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c93c606c-1354-4188-92ba-961f345229f4 lbd_0 20480 00:22:54.411 15:46:33 -- host/perf.sh@80 -- # lb_guid=40f5f898-4254-48ff-a88f-272382db48ef 00:22:54.411 15:46:33 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 40f5f898-4254-48ff-a88f-272382db48ef lvs_n_0 00:22:55.343 15:46:34 -- host/perf.sh@83 -- # ls_nested_guid=fb0fae6f-7775-4a19-af62-fb85495f7eb9 00:22:55.343 15:46:34 -- host/perf.sh@84 -- # get_lvs_free_mb fb0fae6f-7775-4a19-af62-fb85495f7eb9 00:22:55.343 15:46:34 -- common/autotest_common.sh@1343 -- # local lvs_uuid=fb0fae6f-7775-4a19-af62-fb85495f7eb9 00:22:55.343 15:46:34 -- common/autotest_common.sh@1344 -- # local lvs_info 00:22:55.343 15:46:34 -- common/autotest_common.sh@1345 -- # local fc 00:22:55.343 15:46:34 -- common/autotest_common.sh@1346 -- # local cs 00:22:55.343 15:46:34 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:55.343 15:46:34 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:22:55.343 { 00:22:55.343 "uuid": "c93c606c-1354-4188-92ba-961f345229f4", 00:22:55.343 "name": "lvs_0", 00:22:55.343 "base_bdev": "Nvme0n1", 00:22:55.343 "total_data_clusters": 238234, 00:22:55.343 "free_clusters": 233114, 00:22:55.343 "block_size": 512, 00:22:55.343 "cluster_size": 4194304 00:22:55.343 }, 00:22:55.343 { 00:22:55.343 "uuid": "fb0fae6f-7775-4a19-af62-fb85495f7eb9", 00:22:55.343 "name": "lvs_n_0", 00:22:55.343 "base_bdev": "40f5f898-4254-48ff-a88f-272382db48ef", 00:22:55.343 "total_data_clusters": 5114, 00:22:55.343 "free_clusters": 5114, 00:22:55.343 "block_size": 512, 00:22:55.343 "cluster_size": 4194304 00:22:55.343 } 00:22:55.343 ]' 00:22:55.343 15:46:34 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="fb0fae6f-7775-4a19-af62-fb85495f7eb9") .free_clusters' 00:22:55.343 15:46:34 -- common/autotest_common.sh@1348 -- # fc=5114 00:22:55.343 15:46:34 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="fb0fae6f-7775-4a19-af62-fb85495f7eb9") .cluster_size' 00:22:55.601 15:46:34 -- common/autotest_common.sh@1349 -- # cs=4194304 00:22:55.601 15:46:34 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:22:55.601 15:46:34 -- common/autotest_common.sh@1353 -- # echo 20456 00:22:55.601 20456 00:22:55.601 15:46:34 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:22:55.601 15:46:34 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fb0fae6f-7775-4a19-af62-fb85495f7eb9 lbd_nest_0 20456 00:22:55.858 15:46:34 -- host/perf.sh@88 -- # lb_nested_guid=d8e9f9e7-4a46-4dde-8275-8790ae9b5e9d 00:22:55.858 15:46:34 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.858 15:46:35 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:22:55.858 15:46:35 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d8e9f9e7-4a46-4dde-8275-8790ae9b5e9d 00:22:56.115 15:46:35 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:56.372 15:46:35 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:22:56.372 15:46:35 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:22:56.372 15:46:35 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:22:56.372 15:46:35 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:22:56.372 15:46:35 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:56.372 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.560 Initializing NVMe Controllers 00:23:08.560 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:08.560 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:08.560 Initialization complete. Launching workers. 00:23:08.560 ======================================================== 00:23:08.560 Latency(us) 00:23:08.560 Device Information : IOPS MiB/s Average min max 00:23:08.560 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.69 0.02 21961.32 233.58 45058.68 00:23:08.560 ======================================================== 00:23:08.560 Total : 45.69 0.02 21961.32 233.58 45058.68 00:23:08.560 00:23:08.560 15:46:46 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:08.560 15:46:46 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:08.560 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.517 Initializing NVMe Controllers 00:23:18.517 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:18.517 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:18.517 Initialization complete. Launching workers. 
00:23:18.517 ======================================================== 00:23:18.517 Latency(us) 00:23:18.517 Device Information : IOPS MiB/s Average min max 00:23:18.517 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 80.00 10.00 12505.04 6008.77 47882.30 00:23:18.517 ======================================================== 00:23:18.517 Total : 80.00 10.00 12505.04 6008.77 47882.30 00:23:18.517 00:23:18.517 15:46:56 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:18.517 15:46:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:18.517 15:46:56 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:18.517 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.482 Initializing NVMe Controllers 00:23:28.482 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:28.482 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:28.482 Initialization complete. Launching workers. 00:23:28.482 ======================================================== 00:23:28.482 Latency(us) 00:23:28.482 Device Information : IOPS MiB/s Average min max 00:23:28.482 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7552.78 3.69 4237.55 293.67 9701.46 00:23:28.482 ======================================================== 00:23:28.482 Total : 7552.78 3.69 4237.55 293.67 9701.46 00:23:28.482 00:23:28.482 15:47:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:28.482 15:47:06 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:28.482 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.466 Initializing NVMe Controllers 00:23:38.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:38.466 Initialization complete. Launching workers. 00:23:38.466 ======================================================== 00:23:38.466 Latency(us) 00:23:38.466 Device Information : IOPS MiB/s Average min max 00:23:38.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1619.53 202.44 19792.59 1765.74 40195.55 00:23:38.466 ======================================================== 00:23:38.466 Total : 1619.53 202.44 19792.59 1765.74 40195.55 00:23:38.466 00:23:38.466 15:47:16 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:23:38.466 15:47:16 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:38.466 15:47:16 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:38.466 EAL: No free 2048 kB hugepages reported on node 1 00:23:48.493 Initializing NVMe Controllers 00:23:48.493 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:48.493 Controller IO queue size 128, less than required. 00:23:48.493 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:48.493 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:48.493 Initialization complete. Launching workers. 
00:23:48.493 ======================================================== 00:23:48.493 Latency(us) 00:23:48.493 Device Information : IOPS MiB/s Average min max 00:23:48.493 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12053.40 5.89 10626.88 1735.52 24751.50 00:23:48.493 ======================================================== 00:23:48.493 Total : 12053.40 5.89 10626.88 1735.52 24751.50 00:23:48.493 00:23:48.493 15:47:27 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:23:48.493 15:47:27 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:48.493 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.686 Initializing NVMe Controllers 00:24:00.686 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:00.686 Controller IO queue size 128, less than required. 00:24:00.686 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:00.686 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:00.686 Initialization complete. Launching workers. 00:24:00.686 ======================================================== 00:24:00.687 Latency(us) 00:24:00.687 Device Information : IOPS MiB/s Average min max 00:24:00.687 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1225.90 153.24 104638.81 24951.67 192800.02 00:24:00.687 ======================================================== 00:24:00.687 Total : 1225.90 153.24 104638.81 24951.67 192800.02 00:24:00.687 00:24:00.687 15:47:37 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:00.687 15:47:38 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d8e9f9e7-4a46-4dde-8275-8790ae9b5e9d 00:24:00.687 15:47:38 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:00.687 15:47:39 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 40f5f898-4254-48ff-a88f-272382db48ef 00:24:00.687 15:47:39 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:00.687 15:47:39 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:00.687 15:47:39 -- host/perf.sh@114 -- # nvmftestfini 00:24:00.687 15:47:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:00.687 15:47:39 -- nvmf/common.sh@116 -- # sync 00:24:00.687 15:47:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:00.687 15:47:39 -- nvmf/common.sh@119 -- # set +e 00:24:00.687 15:47:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:00.687 15:47:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:00.687 rmmod nvme_tcp 00:24:00.687 rmmod nvme_fabrics 00:24:00.687 rmmod nvme_keyring 00:24:00.687 15:47:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:00.687 15:47:39 -- nvmf/common.sh@123 -- # set -e 00:24:00.687 15:47:39 -- nvmf/common.sh@124 -- # return 0 00:24:00.687 15:47:39 -- nvmf/common.sh@477 -- # '[' -n 2191067 ']' 00:24:00.687 15:47:39 -- nvmf/common.sh@478 -- # killprocess 2191067 00:24:00.687 15:47:39 -- common/autotest_common.sh@926 -- # '[' -z 2191067 ']' 00:24:00.687 15:47:39 -- common/autotest_common.sh@930 -- # 
kill -0 2191067 00:24:00.687 15:47:39 -- common/autotest_common.sh@931 -- # uname 00:24:00.687 15:47:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:00.687 15:47:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2191067 00:24:00.687 15:47:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:00.687 15:47:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:00.687 15:47:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2191067' 00:24:00.687 killing process with pid 2191067 00:24:00.687 15:47:39 -- common/autotest_common.sh@945 -- # kill 2191067 00:24:00.687 15:47:39 -- common/autotest_common.sh@950 -- # wait 2191067 00:24:02.059 15:47:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:02.059 15:47:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:02.059 15:47:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:02.059 15:47:41 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.059 15:47:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:02.059 15:47:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.059 15:47:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:02.059 15:47:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.591 15:47:43 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:04.591 00:24:04.591 real 1m31.405s 00:24:04.591 user 5m39.438s 00:24:04.591 sys 0m15.156s 00:24:04.591 15:47:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.591 15:47:43 -- common/autotest_common.sh@10 -- # set +x 00:24:04.591 ************************************ 00:24:04.591 END TEST nvmf_perf 00:24:04.591 ************************************ 00:24:04.591 15:47:43 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:04.592 15:47:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:04.592 15:47:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:04.592 15:47:43 -- common/autotest_common.sh@10 -- # set +x 00:24:04.592 ************************************ 00:24:04.592 START TEST nvmf_fio_host 00:24:04.592 ************************************ 00:24:04.592 15:47:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:04.592 * Looking for test storage... 
00:24:04.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.592 15:47:43 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.592 15:47:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.592 15:47:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.592 15:47:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.592 15:47:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- paths/export.sh@5 -- # export PATH 00:24:04.592 15:47:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.592 15:47:43 -- nvmf/common.sh@7 -- # uname -s 00:24:04.592 15:47:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.592 15:47:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.592 15:47:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.592 15:47:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.592 15:47:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.592 15:47:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.592 15:47:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.592 15:47:43 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.592 15:47:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.592 15:47:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.592 15:47:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.592 15:47:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:04.592 15:47:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.592 15:47:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.592 15:47:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.592 15:47:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.592 15:47:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.592 15:47:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.592 15:47:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.592 15:47:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- paths/export.sh@5 -- # export PATH 00:24:04.592 15:47:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.592 15:47:43 -- nvmf/common.sh@46 -- # : 0 00:24:04.592 15:47:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:04.592 15:47:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:04.592 15:47:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:04.592 15:47:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.592 15:47:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.592 15:47:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:04.592 15:47:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:04.592 15:47:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:04.592 15:47:43 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:04.592 15:47:43 -- host/fio.sh@14 -- # nvmftestinit 00:24:04.592 15:47:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:04.592 15:47:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.592 15:47:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:04.592 15:47:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:04.592 15:47:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:04.592 15:47:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.592 15:47:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.592 15:47:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.592 15:47:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:04.592 15:47:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:04.592 15:47:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:04.592 15:47:43 -- common/autotest_common.sh@10 -- # set +x 00:24:06.493 15:47:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:06.493 15:47:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:06.493 15:47:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:06.493 15:47:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:06.493 15:47:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:06.493 15:47:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:06.493 15:47:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:06.493 15:47:45 -- nvmf/common.sh@294 -- # net_devs=() 00:24:06.493 15:47:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:06.493 15:47:45 -- nvmf/common.sh@295 -- # e810=() 00:24:06.493 15:47:45 -- nvmf/common.sh@295 -- # local -ga e810 00:24:06.493 15:47:45 -- nvmf/common.sh@296 -- # x722=() 00:24:06.493 15:47:45 -- nvmf/common.sh@296 -- # local -ga x722 00:24:06.493 15:47:45 -- nvmf/common.sh@297 -- # mlx=() 00:24:06.493 15:47:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:06.493 15:47:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:06.493 15:47:45 -- 
nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:06.493 15:47:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:06.493 15:47:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:06.493 15:47:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:06.493 15:47:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.493 15:47:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:06.493 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:06.493 15:47:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:06.493 15:47:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:06.493 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:06.493 15:47:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:06.493 15:47:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.493 15:47:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.493 15:47:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.493 15:47:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.493 15:47:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:06.493 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:06.493 15:47:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.493 15:47:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:06.493 15:47:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:06.493 15:47:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:06.493 15:47:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:06.493 15:47:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:06.493 Found net devices under 0000:0a:00.1: cvl_0_1 
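Editor's note on the block above: this is nvmf/common.sh building its list of supported NICs. It matches known PCI device IDs (here the Intel E810 ID 0x159b on 0000:0a:00.0 and 0000:0a:00.1), then resolves each matched PCI address to its kernel net device through sysfs, which is where cvl_0_0 and cvl_0_1 come from. A minimal standalone sketch of that lookup, covering only the 0x159b ID matched in this run and assuming lspci is available (the real helper also caches x722 and Mellanox IDs):

    #!/usr/bin/env bash
    # Minimal sketch of gather_supported_nvmf_pci_devs for the E810 case only.
    intel=0x8086
    e810_dev=159b                                   # device ID matched in the trace above
    net_devs=()
    for pci in $(lspci -Dnmm -d "${intel#0x}:${e810_dev}" | awk '{print $1}'); do
        echo "Found $pci (${intel} - 0x${e810_dev})"
        # Map the PCI address to its net device name(s) via sysfs.
        for path in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $path ]] && net_devs+=("${path##*/}")   # e.g. cvl_0_0, cvl_0_1
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"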
00:24:06.493 15:47:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:06.493 15:47:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:06.493 15:47:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:06.493 15:47:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:06.493 15:47:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:06.493 15:47:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:06.493 15:47:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:06.493 15:47:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:06.493 15:47:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:06.493 15:47:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:06.493 15:47:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:06.493 15:47:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:06.493 15:47:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:06.493 15:47:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:06.493 15:47:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:06.493 15:47:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:06.493 15:47:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:06.493 15:47:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:06.493 15:47:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:06.493 15:47:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:06.493 15:47:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:06.493 15:47:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:06.493 15:47:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:06.493 15:47:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:06.493 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:06.493 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:24:06.493 00:24:06.493 --- 10.0.0.2 ping statistics --- 00:24:06.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.493 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:24:06.493 15:47:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:06.493 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:06.493 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:24:06.493 00:24:06.493 --- 10.0.0.1 ping statistics --- 00:24:06.493 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:06.493 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:06.493 15:47:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:06.493 15:47:45 -- nvmf/common.sh@410 -- # return 0 00:24:06.493 15:47:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:06.493 15:47:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:06.493 15:47:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:06.493 15:47:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:06.493 15:47:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:06.493 15:47:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:06.493 15:47:45 -- host/fio.sh@16 -- # [[ y != y ]] 00:24:06.493 15:47:45 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:06.493 15:47:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:06.493 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:24:06.493 15:47:45 -- host/fio.sh@24 -- # nvmfpid=2203504 00:24:06.493 15:47:45 -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:06.493 15:47:45 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:06.493 15:47:45 -- host/fio.sh@28 -- # waitforlisten 2203504 00:24:06.493 15:47:45 -- common/autotest_common.sh@819 -- # '[' -z 2203504 ']' 00:24:06.493 15:47:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:06.493 15:47:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:06.493 15:47:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:06.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:06.493 15:47:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:06.493 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:24:06.493 [2024-07-10 15:47:45.587967] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:06.493 [2024-07-10 15:47:45.588064] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:06.493 EAL: No free 2048 kB hugepages reported on node 1 00:24:06.493 [2024-07-10 15:47:45.657821] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:06.493 [2024-07-10 15:47:45.773132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:06.493 [2024-07-10 15:47:45.773305] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:06.493 [2024-07-10 15:47:45.773326] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:06.493 [2024-07-10 15:47:45.773341] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
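Editor's note: the namespace plumbing traced just above (nvmf_tcp_init) is the heart of this phy-mode setup. The two E810 ports are looped back to each other, the target-side port is moved into its own network namespace, and the SPDK target is then launched inside that namespace, so the kernel NVMe/TCP initiator on 10.0.0.1 talks to the SPDK target on 10.0.0.2 over a real link rather than loopback. A condensed sketch using the interface names, addresses, and nvmf_tgt flags from this run (run as root; the SPDK path is specific to this job):

    #!/usr/bin/env bash
    # Condensed replay of the nvmf_tcp_init steps shown in the trace above.
    set -e
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    TGT_IF=cvl_0_0; INIT_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"               # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INIT_IF"          # initiator side stays in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INIT_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INIT_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                              # sanity check: initiator reaches the target IP
    modprobe nvme-tcp                               # kernel initiator used by the later test steps

    # Start the SPDK target inside the namespace: -m 0xF = four cores, -e 0xFFFF = all
    # tracepoint groups, -i 0 = shared-memory ID (matching the '-i 0' seen above).
    ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &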
00:24:06.493 [2024-07-10 15:47:45.773403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:06.493 [2024-07-10 15:47:45.773511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:06.493 [2024-07-10 15:47:45.773513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:06.493 [2024-07-10 15:47:45.773487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.424 15:47:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:07.424 15:47:46 -- common/autotest_common.sh@852 -- # return 0 00:24:07.424 15:47:46 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:07.424 [2024-07-10 15:47:46.741522] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.424 15:47:46 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:07.424 15:47:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:07.424 15:47:46 -- common/autotest_common.sh@10 -- # set +x 00:24:07.424 15:47:46 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:07.681 Malloc1 00:24:07.681 15:47:47 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:07.938 15:47:47 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:08.196 15:47:47 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.453 [2024-07-10 15:47:47.713167] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.453 15:47:47 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:08.710 15:47:47 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:08.710 15:47:47 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:08.710 15:47:47 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:08.710 15:47:47 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:08.710 15:47:47 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:08.710 15:47:47 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:08.710 15:47:47 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:08.710 15:47:47 -- common/autotest_common.sh@1320 -- # shift 00:24:08.710 15:47:47 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:08.710 15:47:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # grep 
libasan 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:08.710 15:47:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:08.710 15:47:47 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:08.710 15:47:47 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:08.710 15:47:47 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:08.710 15:47:47 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:08.710 15:47:47 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:08.967 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:08.967 fio-3.35 00:24:08.967 Starting 1 thread 00:24:08.967 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.494 00:24:11.494 test: (groupid=0, jobs=1): err= 0: pid=2203880: Wed Jul 10 15:47:50 2024 00:24:11.494 read: IOPS=9348, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2006msec) 00:24:11.494 slat (nsec): min=1971, max=111476, avg=2413.91, stdev=1428.19 00:24:11.494 clat (usec): min=3169, max=12893, avg=7582.57, stdev=692.62 00:24:11.494 lat (usec): min=3191, max=12895, avg=7584.98, stdev=692.57 00:24:11.494 clat percentiles (usec): 00:24:11.494 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6849], 20.00th=[ 7111], 00:24:11.494 | 30.00th=[ 7242], 40.00th=[ 7373], 50.00th=[ 7570], 60.00th=[ 7701], 00:24:11.494 | 70.00th=[ 7832], 80.00th=[ 7963], 90.00th=[ 8291], 95.00th=[ 8586], 00:24:11.494 | 99.00th=[10290], 99.50th=[10814], 99.90th=[11863], 99.95th=[12125], 00:24:11.494 | 99.99th=[12911] 00:24:11.494 bw ( KiB/s): min=36632, max=38400, per=99.93%, avg=37370.00, stdev=874.34, samples=4 00:24:11.494 iops : min= 9158, max= 9600, avg=9342.50, stdev=218.59, samples=4 00:24:11.494 write: IOPS=9354, BW=36.5MiB/s (38.3MB/s)(73.3MiB/2006msec); 0 zone resets 00:24:11.494 slat (nsec): min=2060, max=93316, avg=2502.25, stdev=1214.51 00:24:11.494 clat (usec): min=1887, max=11702, avg=6057.55, stdev=598.65 00:24:11.494 lat (usec): min=1893, max=11704, avg=6060.05, stdev=598.63 00:24:11.494 clat percentiles (usec): 00:24:11.494 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5473], 20.00th=[ 5669], 00:24:11.494 | 30.00th=[ 5800], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6128], 00:24:11.494 | 70.00th=[ 6259], 80.00th=[ 6390], 90.00th=[ 6652], 95.00th=[ 6849], 00:24:11.494 | 99.00th=[ 8455], 99.50th=[ 8717], 99.90th=[ 9896], 99.95th=[10159], 00:24:11.494 | 99.99th=[11338] 00:24:11.494 bw ( KiB/s): min=36608, max=38016, per=99.99%, avg=37414.00, stdev=588.80, samples=4 00:24:11.494 iops : min= 9152, max= 9504, avg=9353.50, stdev=147.20, samples=4 00:24:11.494 lat (msec) : 2=0.02%, 4=0.12%, 10=99.05%, 20=0.81% 00:24:11.494 cpu : usr=57.36%, sys=36.51%, ctx=22, majf=0, minf=5 00:24:11.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:11.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:11.494 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:11.494 issued rwts: total=18754,18765,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:11.494 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:11.494 00:24:11.494 Run status group 0 (all jobs): 00:24:11.494 READ: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.8MB), run=2006-2006msec 00:24:11.494 WRITE: bw=36.5MiB/s (38.3MB/s), 36.5MiB/s-36.5MiB/s (38.3MB/s-38.3MB/s), io=73.3MiB (76.9MB), run=2006-2006msec 00:24:11.494 15:47:50 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:11.494 15:47:50 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:11.494 15:47:50 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:11.494 15:47:50 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:11.494 15:47:50 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:11.494 15:47:50 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.494 15:47:50 -- common/autotest_common.sh@1320 -- # shift 00:24:11.494 15:47:50 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:11.494 15:47:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:11.494 15:47:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:11.494 15:47:50 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:11.494 15:47:50 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:11.494 15:47:50 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:11.494 15:47:50 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:11.494 15:47:50 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:11.494 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:11.494 fio-3.35 00:24:11.494 Starting 1 thread 00:24:11.494 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.021 00:24:14.021 test: (groupid=0, jobs=1): err= 0: pid=2204347: Wed Jul 10 15:47:53 2024 00:24:14.021 read: IOPS=8063, BW=126MiB/s (132MB/s)(253MiB/2007msec) 00:24:14.021 slat (nsec): min=3037, max=95824, avg=3832.54, stdev=1921.81 00:24:14.021 clat (usec): min=2325, 
max=54710, avg=9697.07, stdev=5116.79 00:24:14.021 lat (usec): min=2329, max=54714, avg=9700.90, stdev=5116.80 00:24:14.021 clat percentiles (usec): 00:24:14.021 | 1.00th=[ 4752], 5.00th=[ 5735], 10.00th=[ 6390], 20.00th=[ 7177], 00:24:14.021 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 9110], 60.00th=[ 9765], 00:24:14.021 | 70.00th=[10290], 80.00th=[10945], 90.00th=[12125], 95.00th=[13698], 00:24:14.021 | 99.00th=[46400], 99.50th=[50070], 99.90th=[53740], 99.95th=[53740], 00:24:14.021 | 99.99th=[54789] 00:24:14.021 bw ( KiB/s): min=45440, max=74144, per=50.31%, avg=64912.00, stdev=13309.65, samples=4 00:24:14.021 iops : min= 2840, max= 4634, avg=4057.00, stdev=831.85, samples=4 00:24:14.021 write: IOPS=4630, BW=72.4MiB/s (75.9MB/s)(132MiB/1827msec); 0 zone resets 00:24:14.021 slat (usec): min=30, max=190, avg=34.72, stdev= 6.37 00:24:14.021 clat (usec): min=2588, max=58170, avg=10847.97, stdev=3369.15 00:24:14.021 lat (usec): min=2620, max=58202, avg=10882.70, stdev=3369.40 00:24:14.021 clat percentiles (usec): 00:24:14.021 | 1.00th=[ 7373], 5.00th=[ 8094], 10.00th=[ 8586], 20.00th=[ 9241], 00:24:14.021 | 30.00th=[ 9634], 40.00th=[10159], 50.00th=[10552], 60.00th=[10945], 00:24:14.021 | 70.00th=[11469], 80.00th=[11994], 90.00th=[13042], 95.00th=[13960], 00:24:14.021 | 99.00th=[15270], 99.50th=[17171], 99.90th=[57410], 99.95th=[57934], 00:24:14.021 | 99.99th=[57934] 00:24:14.021 bw ( KiB/s): min=46592, max=77664, per=91.14%, avg=67528.00, stdev=14484.01, samples=4 00:24:14.021 iops : min= 2912, max= 4854, avg=4220.50, stdev=905.25, samples=4 00:24:14.021 lat (msec) : 4=0.24%, 10=55.04%, 20=43.69%, 50=0.59%, 100=0.44% 00:24:14.021 cpu : usr=74.13%, sys=22.18%, ctx=26, majf=0, minf=1 00:24:14.021 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:24:14.021 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.021 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:14.021 issued rwts: total=16184,8460,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.021 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:14.021 00:24:14.021 Run status group 0 (all jobs): 00:24:14.021 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (265MB), run=2007-2007msec 00:24:14.021 WRITE: bw=72.4MiB/s (75.9MB/s), 72.4MiB/s-72.4MiB/s (75.9MB/s-75.9MB/s), io=132MiB (139MB), run=1827-1827msec 00:24:14.021 15:47:53 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:14.021 15:47:53 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:24:14.021 15:47:53 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:24:14.021 15:47:53 -- host/fio.sh@51 -- # get_nvme_bdfs 00:24:14.021 15:47:53 -- common/autotest_common.sh@1498 -- # bdfs=() 00:24:14.021 15:47:53 -- common/autotest_common.sh@1498 -- # local bdfs 00:24:14.021 15:47:53 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:24:14.021 15:47:53 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:24:14.021 15:47:53 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:24:14.278 15:47:53 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:24:14.278 15:47:53 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:24:14.278 15:47:53 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:24:17.556 Nvme0n1 00:24:17.556 15:47:56 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:24:20.079 15:47:59 -- host/fio.sh@53 -- # ls_guid=fab280c1-bcaf-4a71-8d67-ad5e7fecb9e0 00:24:20.079 15:47:59 -- host/fio.sh@54 -- # get_lvs_free_mb fab280c1-bcaf-4a71-8d67-ad5e7fecb9e0 00:24:20.079 15:47:59 -- common/autotest_common.sh@1343 -- # local lvs_uuid=fab280c1-bcaf-4a71-8d67-ad5e7fecb9e0 00:24:20.079 15:47:59 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:20.079 15:47:59 -- common/autotest_common.sh@1345 -- # local fc 00:24:20.079 15:47:59 -- common/autotest_common.sh@1346 -- # local cs 00:24:20.079 15:47:59 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:20.359 15:47:59 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:20.359 { 00:24:20.359 "uuid": "fab280c1-bcaf-4a71-8d67-ad5e7fecb9e0", 00:24:20.359 "name": "lvs_0", 00:24:20.359 "base_bdev": "Nvme0n1", 00:24:20.359 "total_data_clusters": 930, 00:24:20.359 "free_clusters": 930, 00:24:20.359 "block_size": 512, 00:24:20.359 "cluster_size": 1073741824 00:24:20.359 } 00:24:20.359 ]' 00:24:20.359 15:47:59 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="fab280c1-bcaf-4a71-8d67-ad5e7fecb9e0") .free_clusters' 00:24:20.359 15:47:59 -- common/autotest_common.sh@1348 -- # fc=930 00:24:20.359 15:47:59 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="fab280c1-bcaf-4a71-8d67-ad5e7fecb9e0") .cluster_size' 00:24:20.359 15:47:59 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:24:20.359 15:47:59 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:24:20.359 15:47:59 -- common/autotest_common.sh@1353 -- # echo 952320 00:24:20.359 952320 00:24:20.359 15:47:59 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:24:20.924 02559107-8511-46bb-8f60-430e3bc27e62 00:24:20.924 15:48:00 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:24:21.180 15:48:00 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:24:21.437 15:48:00 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:21.693 15:48:00 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:21.693 15:48:00 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:21.693 15:48:00 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:21.693 15:48:00 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:21.693 15:48:00 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:21.693 15:48:00 -- common/autotest_common.sh@1319 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.693 15:48:00 -- common/autotest_common.sh@1320 -- # shift 00:24:21.693 15:48:00 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:21.693 15:48:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:21.693 15:48:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:21.693 15:48:00 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:21.693 15:48:00 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:21.693 15:48:00 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:21.693 15:48:00 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:21.693 15:48:00 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:21.984 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:21.984 fio-3.35 00:24:21.984 Starting 1 thread 00:24:21.984 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.508 00:24:24.508 test: (groupid=0, jobs=1): err= 0: pid=2205757: Wed Jul 10 15:48:03 2024 00:24:24.508 read: IOPS=6338, BW=24.8MiB/s (26.0MB/s)(49.7MiB/2008msec) 00:24:24.508 slat (usec): min=2, max=135, avg= 2.70, stdev= 2.22 00:24:24.508 clat (usec): min=809, max=171015, avg=11118.39, stdev=11356.77 00:24:24.508 lat (usec): min=812, max=171051, avg=11121.08, stdev=11357.00 00:24:24.508 clat percentiles (msec): 00:24:24.508 | 1.00th=[ 9], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:24:24.508 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:24:24.508 | 70.00th=[ 11], 80.00th=[ 11], 90.00th=[ 12], 95.00th=[ 12], 00:24:24.508 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:24:24.508 | 99.99th=[ 171] 00:24:24.508 bw ( KiB/s): min=17712, max=27984, per=99.86%, avg=25316.00, stdev=5071.03, samples=4 00:24:24.508 iops : min= 4428, max= 6996, avg=6329.00, stdev=1267.76, samples=4 00:24:24.508 write: IOPS=6334, BW=24.7MiB/s (25.9MB/s)(49.7MiB/2008msec); 0 zone resets 00:24:24.508 slat (nsec): min=2216, max=97328, avg=2812.53, stdev=1808.36 00:24:24.508 clat (usec): min=401, max=168936, avg=8908.62, stdev=10653.27 00:24:24.508 lat (usec): min=404, max=168944, avg=8911.43, stdev=10653.50 00:24:24.508 clat percentiles (msec): 00:24:24.508 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:24:24.508 | 30.00th=[ 8], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:24:24.508 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:24:24.508 | 99.00th=[ 11], 99.50th=[ 16], 99.90th=[ 169], 99.95th=[ 169], 00:24:24.508 | 99.99th=[ 169] 00:24:24.508 bw ( KiB/s): min=18728, max=27624, 
per=99.94%, avg=25322.00, stdev=4397.05, samples=4 00:24:24.508 iops : min= 4682, max= 6906, avg=6330.50, stdev=1099.26, samples=4 00:24:24.508 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:24:24.508 lat (msec) : 2=0.03%, 4=0.13%, 10=66.68%, 20=32.64%, 250=0.50% 00:24:24.508 cpu : usr=52.91%, sys=42.50%, ctx=107, majf=0, minf=5 00:24:24.508 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:24.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:24.508 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:24.508 issued rwts: total=12727,12719,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:24.508 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:24.508 00:24:24.508 Run status group 0 (all jobs): 00:24:24.508 READ: bw=24.8MiB/s (26.0MB/s), 24.8MiB/s-24.8MiB/s (26.0MB/s-26.0MB/s), io=49.7MiB (52.1MB), run=2008-2008msec 00:24:24.508 WRITE: bw=24.7MiB/s (25.9MB/s), 24.7MiB/s-24.7MiB/s (25.9MB/s-25.9MB/s), io=49.7MiB (52.1MB), run=2008-2008msec 00:24:24.508 15:48:03 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:24.508 15:48:03 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:24:25.881 15:48:04 -- host/fio.sh@64 -- # ls_nested_guid=e204bca4-7093-46e4-8ead-b12e11780cef 00:24:25.881 15:48:04 -- host/fio.sh@65 -- # get_lvs_free_mb e204bca4-7093-46e4-8ead-b12e11780cef 00:24:25.881 15:48:04 -- common/autotest_common.sh@1343 -- # local lvs_uuid=e204bca4-7093-46e4-8ead-b12e11780cef 00:24:25.881 15:48:04 -- common/autotest_common.sh@1344 -- # local lvs_info 00:24:25.881 15:48:04 -- common/autotest_common.sh@1345 -- # local fc 00:24:25.881 15:48:04 -- common/autotest_common.sh@1346 -- # local cs 00:24:25.881 15:48:04 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:24:25.881 15:48:05 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:24:25.881 { 00:24:25.881 "uuid": "fab280c1-bcaf-4a71-8d67-ad5e7fecb9e0", 00:24:25.881 "name": "lvs_0", 00:24:25.881 "base_bdev": "Nvme0n1", 00:24:25.881 "total_data_clusters": 930, 00:24:25.881 "free_clusters": 0, 00:24:25.881 "block_size": 512, 00:24:25.881 "cluster_size": 1073741824 00:24:25.881 }, 00:24:25.881 { 00:24:25.881 "uuid": "e204bca4-7093-46e4-8ead-b12e11780cef", 00:24:25.881 "name": "lvs_n_0", 00:24:25.881 "base_bdev": "02559107-8511-46bb-8f60-430e3bc27e62", 00:24:25.881 "total_data_clusters": 237847, 00:24:25.881 "free_clusters": 237847, 00:24:25.881 "block_size": 512, 00:24:25.881 "cluster_size": 4194304 00:24:25.881 } 00:24:25.881 ]' 00:24:25.881 15:48:05 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="e204bca4-7093-46e4-8ead-b12e11780cef") .free_clusters' 00:24:25.881 15:48:05 -- common/autotest_common.sh@1348 -- # fc=237847 00:24:25.881 15:48:05 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="e204bca4-7093-46e4-8ead-b12e11780cef") .cluster_size' 00:24:25.881 15:48:05 -- common/autotest_common.sh@1349 -- # cs=4194304 00:24:25.881 15:48:05 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:24:25.881 15:48:05 -- common/autotest_common.sh@1353 -- # echo 951388 00:24:25.881 951388 00:24:25.881 15:48:05 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:24:26.812 
cffa2893-4f6f-4d02-af9c-5ce814af72a4 00:24:26.812 15:48:05 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:24:26.812 15:48:06 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:24:27.070 15:48:06 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:27.327 15:48:06 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:27.327 15:48:06 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:27.327 15:48:06 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:24:27.327 15:48:06 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:27.327 15:48:06 -- common/autotest_common.sh@1318 -- # local sanitizers 00:24:27.327 15:48:06 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.327 15:48:06 -- common/autotest_common.sh@1320 -- # shift 00:24:27.327 15:48:06 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:24:27.327 15:48:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # grep libasan 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:27.327 15:48:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:27.327 15:48:06 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:24:27.327 15:48:06 -- common/autotest_common.sh@1324 -- # asan_lib= 00:24:27.328 15:48:06 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:24:27.328 15:48:06 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:27.328 15:48:06 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:27.585 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:27.585 fio-3.35 00:24:27.585 Starting 1 thread 00:24:27.585 EAL: No free 2048 kB hugepages reported on node 1 00:24:30.114 00:24:30.114 test: (groupid=0, jobs=1): err= 0: pid=2206569: Wed Jul 10 15:48:09 2024 00:24:30.114 read: IOPS=6195, BW=24.2MiB/s (25.4MB/s)(48.6MiB/2008msec) 00:24:30.114 slat (usec): 
min=2, max=130, avg= 2.77, stdev= 2.18 00:24:30.114 clat (usec): min=4437, max=18965, avg=11435.09, stdev=951.92 00:24:30.114 lat (usec): min=4456, max=18968, avg=11437.86, stdev=951.84 00:24:30.114 clat percentiles (usec): 00:24:30.114 | 1.00th=[ 9372], 5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:24:30.114 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:24:30.114 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:24:30.114 | 99.00th=[13566], 99.50th=[13829], 99.90th=[17433], 99.95th=[18744], 00:24:30.114 | 99.99th=[19006] 00:24:30.114 bw ( KiB/s): min=23384, max=25328, per=99.88%, avg=24750.00, stdev=917.12, samples=4 00:24:30.114 iops : min= 5846, max= 6332, avg=6187.50, stdev=229.28, samples=4 00:24:30.114 write: IOPS=6187, BW=24.2MiB/s (25.3MB/s)(48.5MiB/2008msec); 0 zone resets 00:24:30.114 slat (nsec): min=2162, max=99440, avg=2879.25, stdev=1706.52 00:24:30.114 clat (usec): min=2250, max=17401, avg=9101.87, stdev=842.74 00:24:30.114 lat (usec): min=2256, max=17404, avg=9104.75, stdev=842.73 00:24:30.114 clat percentiles (usec): 00:24:30.114 | 1.00th=[ 7177], 5.00th=[ 7832], 10.00th=[ 8094], 20.00th=[ 8455], 00:24:30.114 | 30.00th=[ 8717], 40.00th=[ 8848], 50.00th=[ 9110], 60.00th=[ 9241], 00:24:30.114 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:24:30.114 | 99.00th=[11076], 99.50th=[11338], 99.90th=[13304], 99.95th=[15664], 00:24:30.114 | 99.99th=[17433] 00:24:30.114 bw ( KiB/s): min=24536, max=24832, per=99.90%, avg=24726.00, stdev=130.21, samples=4 00:24:30.114 iops : min= 6134, max= 6208, avg=6181.50, stdev=32.55, samples=4 00:24:30.114 lat (msec) : 4=0.04%, 10=46.17%, 20=53.79% 00:24:30.114 cpu : usr=55.98%, sys=39.09%, ctx=92, majf=0, minf=5 00:24:30.114 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:24:30.114 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:30.114 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:30.114 issued rwts: total=12440,12425,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:30.114 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:30.114 00:24:30.114 Run status group 0 (all jobs): 00:24:30.114 READ: bw=24.2MiB/s (25.4MB/s), 24.2MiB/s-24.2MiB/s (25.4MB/s-25.4MB/s), io=48.6MiB (51.0MB), run=2008-2008msec 00:24:30.114 WRITE: bw=24.2MiB/s (25.3MB/s), 24.2MiB/s-24.2MiB/s (25.3MB/s-25.3MB/s), io=48.5MiB (50.9MB), run=2008-2008msec 00:24:30.114 15:48:09 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:24:30.114 15:48:09 -- host/fio.sh@74 -- # sync 00:24:30.114 15:48:09 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:24:34.295 15:48:13 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:24:34.295 15:48:13 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:24:37.637 15:48:16 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:24:37.637 15:48:16 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:24:39.535 15:48:18 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:39.535 15:48:18 -- host/fio.sh@85 -- # rm -f 
./local-test-0-verify.state 00:24:39.535 15:48:18 -- host/fio.sh@86 -- # nvmftestfini 00:24:39.535 15:48:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:39.535 15:48:18 -- nvmf/common.sh@116 -- # sync 00:24:39.535 15:48:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:39.535 15:48:18 -- nvmf/common.sh@119 -- # set +e 00:24:39.535 15:48:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:39.535 15:48:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:39.535 rmmod nvme_tcp 00:24:39.535 rmmod nvme_fabrics 00:24:39.535 rmmod nvme_keyring 00:24:39.535 15:48:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:39.535 15:48:18 -- nvmf/common.sh@123 -- # set -e 00:24:39.535 15:48:18 -- nvmf/common.sh@124 -- # return 0 00:24:39.535 15:48:18 -- nvmf/common.sh@477 -- # '[' -n 2203504 ']' 00:24:39.535 15:48:18 -- nvmf/common.sh@478 -- # killprocess 2203504 00:24:39.535 15:48:18 -- common/autotest_common.sh@926 -- # '[' -z 2203504 ']' 00:24:39.535 15:48:18 -- common/autotest_common.sh@930 -- # kill -0 2203504 00:24:39.535 15:48:18 -- common/autotest_common.sh@931 -- # uname 00:24:39.535 15:48:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:39.535 15:48:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2203504 00:24:39.535 15:48:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:39.535 15:48:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:39.535 15:48:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2203504' 00:24:39.535 killing process with pid 2203504 00:24:39.535 15:48:18 -- common/autotest_common.sh@945 -- # kill 2203504 00:24:39.535 15:48:18 -- common/autotest_common.sh@950 -- # wait 2203504 00:24:39.535 15:48:18 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:39.535 15:48:18 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:39.535 15:48:18 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:39.535 15:48:18 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:39.535 15:48:18 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:39.535 15:48:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:39.535 15:48:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:39.536 15:48:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.066 15:48:20 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:42.066 00:24:42.066 real 0m37.495s 00:24:42.066 user 2m22.777s 00:24:42.066 sys 0m7.450s 00:24:42.066 15:48:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.066 15:48:20 -- common/autotest_common.sh@10 -- # set +x 00:24:42.066 ************************************ 00:24:42.066 END TEST nvmf_fio_host 00:24:42.066 ************************************ 00:24:42.067 15:48:20 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:42.067 15:48:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:42.067 15:48:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:42.067 15:48:20 -- common/autotest_common.sh@10 -- # set +x 00:24:42.067 ************************************ 00:24:42.067 START TEST nvmf_failover 00:24:42.067 ************************************ 00:24:42.067 15:48:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:42.067 * Looking for test storage... 
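Editor's note on the nvmf_fio_host run that just finished above: every fio pass in it drove the NVMe-oF target through the SPDK fio plugin rather than a block device, by LD_PRELOAD-ing build/fio/spdk_nvme and passing the transport ID as fio's --filename (the job file already sets ioengine=spdk, as the "ioengine=spdk, iodepth=128" banner shows). A sketch of that invocation, using the config file, paths, and target address from this job:

    #!/usr/bin/env bash
    # Sketch of the fio_nvme invocations used throughout the nvmf_fio_host test above.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    FIO=/usr/src/fio/fio
    CONF="$SPDK/app/fio/nvme/example_config.fio"    # job file shipped with SPDK

    # The plugin is loaded via LD_PRELOAD and the NVMe-oF subsystem is addressed through
    # --filename instead of a device path; --bs=4096 overrides the block size in the job file.
    LD_PRELOAD="$SPDK/build/fio/spdk_nvme" "$FIO" "$CONF" \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' \
        --bs=4096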
00:24:42.067 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.067 15:48:20 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.067 15:48:20 -- nvmf/common.sh@7 -- # uname -s 00:24:42.067 15:48:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.067 15:48:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.067 15:48:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.067 15:48:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.067 15:48:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.067 15:48:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.067 15:48:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.067 15:48:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.067 15:48:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.067 15:48:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.067 15:48:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.067 15:48:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:42.067 15:48:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.067 15:48:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.067 15:48:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.067 15:48:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.067 15:48:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.067 15:48:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.067 15:48:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.067 15:48:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.067 15:48:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.067 15:48:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.067 15:48:21 -- paths/export.sh@5 -- # export PATH 00:24:42.067 15:48:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.067 15:48:21 -- nvmf/common.sh@46 -- # : 0 00:24:42.067 15:48:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:42.067 15:48:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:42.067 15:48:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:42.067 15:48:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.067 15:48:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.067 15:48:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:42.067 15:48:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:42.067 15:48:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:42.067 15:48:21 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:42.067 15:48:21 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:42.067 15:48:21 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:42.067 15:48:21 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:42.067 15:48:21 -- host/failover.sh@18 -- # nvmftestinit 00:24:42.067 15:48:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:42.067 15:48:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.067 15:48:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:42.067 15:48:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:42.067 15:48:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:42.067 15:48:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.067 15:48:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.067 15:48:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.067 15:48:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:42.067 15:48:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:42.067 15:48:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:42.067 15:48:21 -- common/autotest_common.sh@10 -- # set +x 00:24:43.967 15:48:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:43.967 15:48:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:43.967 15:48:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:43.967 15:48:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:43.967 15:48:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:43.967 15:48:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:43.967 15:48:23 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:24:43.967 15:48:23 -- nvmf/common.sh@294 -- # net_devs=() 00:24:43.967 15:48:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:43.967 15:48:23 -- nvmf/common.sh@295 -- # e810=() 00:24:43.967 15:48:23 -- nvmf/common.sh@295 -- # local -ga e810 00:24:43.967 15:48:23 -- nvmf/common.sh@296 -- # x722=() 00:24:43.967 15:48:23 -- nvmf/common.sh@296 -- # local -ga x722 00:24:43.967 15:48:23 -- nvmf/common.sh@297 -- # mlx=() 00:24:43.967 15:48:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:43.967 15:48:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:43.967 15:48:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:43.967 15:48:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:43.967 15:48:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:43.967 15:48:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.967 15:48:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:43.967 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:43.967 15:48:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:43.967 15:48:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:43.967 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:43.967 15:48:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:43.967 15:48:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:43.967 15:48:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.967 15:48:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.967 15:48:23 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:24:43.967 15:48:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.967 15:48:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:43.967 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:43.968 15:48:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.968 15:48:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:43.968 15:48:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:43.968 15:48:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:43.968 15:48:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:43.968 15:48:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:43.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:43.968 15:48:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:43.968 15:48:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:43.968 15:48:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:43.968 15:48:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:43.968 15:48:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:43.968 15:48:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:43.968 15:48:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:43.968 15:48:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:43.968 15:48:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:43.968 15:48:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:43.968 15:48:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:43.968 15:48:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:43.968 15:48:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:43.968 15:48:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:43.968 15:48:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:43.968 15:48:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:43.968 15:48:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:43.968 15:48:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:43.968 15:48:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:43.968 15:48:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:43.968 15:48:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:43.968 15:48:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:43.968 15:48:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:43.968 15:48:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:43.968 15:48:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:43.968 15:48:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:43.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:43.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:24:43.968 00:24:43.968 --- 10.0.0.2 ping statistics --- 00:24:43.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.968 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:24:43.968 15:48:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:43.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:43.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms 00:24:43.968 00:24:43.968 --- 10.0.0.1 ping statistics --- 00:24:43.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:43.968 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:24:43.968 15:48:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:43.968 15:48:23 -- nvmf/common.sh@410 -- # return 0 00:24:43.968 15:48:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:43.968 15:48:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:43.968 15:48:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:43.968 15:48:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:43.968 15:48:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:43.968 15:48:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:43.968 15:48:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:43.968 15:48:23 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:43.968 15:48:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:43.968 15:48:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:43.968 15:48:23 -- common/autotest_common.sh@10 -- # set +x 00:24:43.968 15:48:23 -- nvmf/common.sh@469 -- # nvmfpid=2210349 00:24:43.968 15:48:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:43.968 15:48:23 -- nvmf/common.sh@470 -- # waitforlisten 2210349 00:24:43.968 15:48:23 -- common/autotest_common.sh@819 -- # '[' -z 2210349 ']' 00:24:43.968 15:48:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:43.968 15:48:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:43.968 15:48:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:43.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:43.968 15:48:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:43.968 15:48:23 -- common/autotest_common.sh@10 -- # set +x 00:24:43.968 [2024-07-10 15:48:23.225361] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:24:43.968 [2024-07-10 15:48:23.225455] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:43.968 EAL: No free 2048 kB hugepages reported on node 1 00:24:43.968 [2024-07-10 15:48:23.294304] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:44.225 [2024-07-10 15:48:23.412590] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:44.225 [2024-07-10 15:48:23.412764] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:44.225 [2024-07-10 15:48:23.412797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:44.225 [2024-07-10 15:48:23.412812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
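Editor's note: at this point the trace has found the two E810 ports (0x8086:0x159b, exposed as cvl_0_0 and cvl_0_1), built the TCP test topology around a network namespace, verified it with pings in both directions, loaded nvme-tcp, and started nvmf_tgt inside the namespace. The following is a condensed sketch of those nvmf_tcp_init / nvmfappstart steps with interface names, addresses and flags exactly as logged; the relative ./build/bin path and the use of $! with waitforlisten are shorthand for the absolute paths and the explicit pid (2210349) shown in the trace.

    # cvl_0_0 becomes the target NIC inside its own namespace;
    # cvl_0_1 stays in the root namespace as the initiator NIC.
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns add $NVMF_TARGET_NAMESPACE
    ip link set cvl_0_0 netns $NVMF_TARGET_NAMESPACE
    ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator address
    ip netns exec $NVMF_TARGET_NAMESPACE ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set cvl_0_0 up
    ip netns exec $NVMF_TARGET_NAMESPACE ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                    # initiator -> target
    ip netns exec $NVMF_TARGET_NAMESPACE ping -c 1 10.0.0.1               # target -> initiator
    modprobe nvme-tcp
    # the target application runs inside the namespace; the test then waits
    # for it to listen on its RPC socket (/var/tmp/spdk.sock)
    ip netns exec $NVMF_TARGET_NAMESPACE ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    waitforlisten $!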
00:24:44.225 [2024-07-10 15:48:23.412909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:44.225 [2024-07-10 15:48:23.414445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:44.226 [2024-07-10 15:48:23.414456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.157 15:48:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:45.157 15:48:24 -- common/autotest_common.sh@852 -- # return 0 00:24:45.157 15:48:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:45.157 15:48:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:45.157 15:48:24 -- common/autotest_common.sh@10 -- # set +x 00:24:45.157 15:48:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:45.157 15:48:24 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:45.158 [2024-07-10 15:48:24.440168] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:45.158 15:48:24 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:45.415 Malloc0 00:24:45.416 15:48:24 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:45.673 15:48:24 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:45.931 15:48:25 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.189 [2024-07-10 15:48:25.454723] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.189 15:48:25 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:46.447 [2024-07-10 15:48:25.699463] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:46.447 15:48:25 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:46.705 [2024-07-10 15:48:25.924208] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:46.705 15:48:25 -- host/failover.sh@31 -- # bdevperf_pid=2210767 00:24:46.705 15:48:25 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:46.705 15:48:25 -- host/failover.sh@34 -- # waitforlisten 2210767 /var/tmp/bdevperf.sock 00:24:46.705 15:48:25 -- common/autotest_common.sh@819 -- # '[' -z 2210767 ']' 00:24:46.705 15:48:25 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:46.705 15:48:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:46.705 15:48:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:46.705 15:48:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:24:46.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:46.705 15:48:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:46.705 15:48:25 -- common/autotest_common.sh@10 -- # set +x 00:24:47.638 15:48:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:47.638 15:48:26 -- common/autotest_common.sh@852 -- # return 0 00:24:47.638 15:48:26 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.203 NVMe0n1 00:24:48.203 15:48:27 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:48.461 00:24:48.461 15:48:27 -- host/failover.sh@39 -- # run_test_pid=2210936 00:24:48.461 15:48:27 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:48.719 15:48:27 -- host/failover.sh@41 -- # sleep 1 00:24:49.652 15:48:28 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:49.910 [2024-07-10 15:48:29.073281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073445] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073486] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073498] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073523] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073535] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073548] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be 
set 00:24:49.910 [2024-07-10 15:48:29.073561] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073573] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073643] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073655] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073714] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073737] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073761] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073785] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073808] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073821] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 
is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073833] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073845] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.910 [2024-07-10 15:48:29.073883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073917] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.073999] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.074010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 [2024-07-10 15:48:29.074021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2370 is same with the state(5) to be set 00:24:49.911 15:48:29 -- host/failover.sh@45 -- # sleep 3 00:24:53.188 15:48:32 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:53.188 00:24:53.188 15:48:32 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:53.446 [2024-07-10 15:48:32.625130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625213] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625225] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625237] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.446 [2024-07-10 15:48:32.625300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625363] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625388] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625442] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625492] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625517] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 
00:24:53.447 [2024-07-10 15:48:32.625529] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625555] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625628] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625657] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625696] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625709] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625722] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625735] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is 
same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625858] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 [2024-07-10 15:48:32.625881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbc2b80 is same with the state(5) to be set 00:24:53.447 15:48:32 -- host/failover.sh@50 -- # sleep 3 00:24:56.719 15:48:35 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.719 [2024-07-10 15:48:35.886082] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.719 15:48:35 -- host/failover.sh@55 -- # sleep 1 00:24:57.649 15:48:36 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:57.909 [2024-07-10 15:48:37.129378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129469] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129497] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129510] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129537] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129550] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129575] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129587] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129599] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129612] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129624] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129635] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129647] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129665] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129677] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129701] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129720] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129747] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129816] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129828] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129851] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129919] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129942] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129977] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.129989] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.130001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.130012] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.130027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.909 [2024-07-10 15:48:37.130039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130084] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130145] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130156] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 
00:24:57.910 [2024-07-10 15:48:37.130208] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130220] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130255] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130280] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130316] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130327] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130351] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130377] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130402] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130419] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130451] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is 
same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 [2024-07-10 15:48:37.130499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x96d2f0 is same with the state(5) to be set 00:24:57.910 15:48:37 -- host/failover.sh@59 -- # wait 2210936 00:25:04.473 0 00:25:04.473 15:48:42 -- host/failover.sh@61 -- # killprocess 2210767 00:25:04.473 15:48:42 -- common/autotest_common.sh@926 -- # '[' -z 2210767 ']' 00:25:04.473 15:48:43 -- common/autotest_common.sh@930 -- # kill -0 2210767 00:25:04.473 15:48:43 -- common/autotest_common.sh@931 -- # uname 00:25:04.473 15:48:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:04.473 15:48:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2210767 00:25:04.473 15:48:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:04.473 15:48:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:04.473 15:48:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2210767' 00:25:04.473 killing process with pid 2210767 00:25:04.473 15:48:43 -- common/autotest_common.sh@945 -- # kill 2210767 00:25:04.473 15:48:43 -- common/autotest_common.sh@950 -- # wait 2210767 00:25:04.473 15:48:43 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:04.473 [2024-07-10 15:48:25.983494] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:04.473 [2024-07-10 15:48:25.983577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2210767 ] 00:25:04.473 EAL: No free 2048 kB hugepages reported on node 1 00:25:04.473 [2024-07-10 15:48:26.044636] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.473 [2024-07-10 15:48:26.151813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.473 Running I/O for 15 seconds... 
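Editor's note: the timestamps above walk through the whole failover choreography; condensed into one place it looks like the sketch below. The rpc.py invocation is shortened to a relative path and the three add_listener calls are folded into a loop; every command, port, flag and NQN otherwise appears verbatim in the trace. The bursts of "recv state of tqpair ... state(5)" messages and the ABORTED - SQ DELETION completions in try.txt line up with the listener removals, i.e. with the moments the in-flight verify I/O is forced onto the remaining path.

    rpc=scripts/rpc.py                          # shortened; the trace uses the absolute spdk path
    brpc="$rpc -s /var/tmp/bdevperf.sock"       # bdevperf's own RPC socket

    # Target side: one subsystem backed by a 64 MiB, 512 B-block Malloc bdev,
    # listening on three TCP ports.
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
    done

    # Initiator side: bdevperf (-q 128 -o 4096 -w verify -t 15) attaches two paths
    # to the same controller, then listeners are removed one at a time.
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # perform_tests starts 15 s of verify I/O, and while it runs:
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # fail over to 4421
    sleep 3
    $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # fail over to 4422
    sleep 3
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422   # fail back to 4420
    # wait $run_test_pid    -> bdevperf finishes its 15 s run, then the test tears down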
00:25:04.473 [2024-07-10 15:48:29.074187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.474 [2024-07-10 15:48:29.074305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.474 [2024-07-10 15:48:29.074358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074374] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.474 [2024-07-10 15:48:29.074387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.474 [2024-07-10 15:48:29.074422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074444] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165abd0 is same with the state(5) to be set 00:25:04.474 [2024-07-10 15:48:29.074507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:115712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:115720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:115752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:115760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.074980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.074993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075020] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:115816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:115824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:115864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:115872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:115880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:115888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:115904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:115920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:115928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:115352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:115424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:115936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:115944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:115960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:115968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:115992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 
[2024-07-10 15:48:29.075904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.075945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:116016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.075972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.075988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.076001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.076028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.076056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:116056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.076140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.076168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076183] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:116080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.474 [2024-07-10 15:48:29.076228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:116096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:116104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.474 [2024-07-10 15:48:29.076459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.474 [2024-07-10 15:48:29.076476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:115536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076505] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:115552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:116112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.076577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.076672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.076745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076821] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.076835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.076891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:116200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.076919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.076947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:116216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.076975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.076994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:116240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:116248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:115640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115704 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:116296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:116304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:116312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:116328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 
[2024-07-10 15:48:29.077753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:116368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.077933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.077976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.077988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:116432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.078044] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.078072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.475 [2024-07-10 15:48:29.078101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:115800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:115808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:115832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:115840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.475 [2024-07-10 15:48:29.078323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:115848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.475 [2024-07-10 15:48:29.078336] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.475 [2024-07-10 15:48:29.078350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1679690 is same with the state(5) to be set
00:25:04.475 [2024-07-10 15:48:29.078366] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:25:04.475 [2024-07-10 15:48:29.078377] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:25:04.475 [2024-07-10 15:48:29.078389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115856 len:8 PRP1 0x0 PRP2 0x0
00:25:04.475 [2024-07-10 15:48:29.078401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.475 [2024-07-10 15:48:29.078498] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1679690 was disconnected and freed. reset controller.
00:25:04.475 [2024-07-10 15:48:29.078525] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:25:04.475 [2024-07-10 15:48:29.078543] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:04.475 [2024-07-10 15:48:29.080812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:25:04.475 [2024-07-10 15:48:29.080850] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165abd0 (9): Bad file descriptor
00:25:04.475 [2024-07-10 15:48:29.111777] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:25:04.475 [2024-07-10 15:48:32.626071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:105568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.475 [2024-07-10 15:48:32.626125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.475 [2024-07-10 15:48:32.626157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:105576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.475 [2024-07-10 15:48:32.626174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.475 [2024-07-10 15:48:32.626191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:105592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.475 [2024-07-10 15:48:32.626205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.476 [2024-07-10 15:48:32.626221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.476 [2024-07-10 15:48:32.626235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.476 [2024-07-10 15:48:32.626265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:105040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:04.476 [2024-07-10 15:48:32.626278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:04.476 [2024-07-10 15:48:32.626293]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:105056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:105624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:105632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626614] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.626655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:105744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.626727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:105760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.626783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.626891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626905] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.626977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.626992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:105808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.627176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 
nsid:1 lba:105824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.627258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:105880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.627481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:105904 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.627509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.627537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:105504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:105512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.476 [2024-07-10 15:48:32.627812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:105944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:105952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:105960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:105968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.627974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.627988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.628000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:105984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.628031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.628064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 
15:48:32.628092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:106008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.628118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.628145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.628173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:106032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.628200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.476 [2024-07-10 15:48:32.628227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.628254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.476 [2024-07-10 15:48:32.628268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:106056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.476 [2024-07-10 15:48:32.628281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:106080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628361] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:106088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:106104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:106112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:106136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:106144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:105544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628675] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:105552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:105584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:105600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:105616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:105640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:106160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:106176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.628967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.628981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:106184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.628999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:106192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:106200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:106208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:106216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:106224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:106240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:106248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:106256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:106264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:106272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:106280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:106288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:106296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:106304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:106312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:106320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:106328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:106336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:04.477 [2024-07-10 15:48:32.629573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:106344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:106352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:106360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:106368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.477 [2024-07-10 15:48:32.629669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:105672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:105680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 
15:48:32.629862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:32.629879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.629893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1667050 is same with the state(5) to be set 00:25:04.477 [2024-07-10 15:48:32.629908] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.477 [2024-07-10 15:48:32.629919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.477 [2024-07-10 15:48:32.629949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105728 len:8 PRP1 0x0 PRP2 0x0 00:25:04.477 [2024-07-10 15:48:32.629964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.630027] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1667050 was disconnected and freed. reset controller. 00:25:04.477 [2024-07-10 15:48:32.630046] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:25:04.477 [2024-07-10 15:48:32.630093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:32.630114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.630131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:32.630218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.630235] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:32.630250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.630266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:32.630280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:32.630295] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.477 [2024-07-10 15:48:32.630349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165abd0 (9): Bad file descriptor 00:25:04.477 [2024-07-10 15:48:32.632627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.477 [2024-07-10 15:48:32.663119] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
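Each NOTICE pair above records one in-flight I/O that was completed as "ABORTED - SQ DELETION (00/08)" while the TCP qpair was torn down, after which the bdev layer failed over from 10.0.0.2:4421 to 10.0.0.2:4422 and reset nqn.2016-06.io.spdk:cnode1. With hundreds of such entries, a saved copy of this console log can be condensed into per-opcode abort counts. The sketch below is a hypothetical helper, not part of the SPDK test suite; it only assumes the nvme_io_qpair_print_command format visible in this log, and the file name is illustrative.

    # summarize_aborts.py - hypothetical helper; not part of the SPDK repo.
    # Tallies the READ/WRITE commands that were printed (and then aborted)
    # in a saved copy of this console log. Assumes the log format shown above.
    import re
    from collections import Counter

    # Matches e.g. "nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105832 len:8"
    CMD_RE = re.compile(
        r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
        r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
    )

    def summarize(path="nvmf-tcp-phy-autotest-console.log"):
        counts = Counter()   # aborted commands per opcode
        lbas = []            # LBAs touched by the aborted commands
        with open(path, errors="replace") as log:
            for line in log:
                for opc, _sqid, _cid, _nsid, lba, _length in CMD_RE.findall(line):
                    counts[opc] += 1
                    lbas.append(int(lba))
        if lbas:
            print(f"aborted commands: {dict(counts)}; lba range {min(lbas)}..{max(lbas)}")
        else:
            print("no aborted-command entries found")

    if __name__ == "__main__":
        summarize()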
00:25:04.477 [2024-07-10 15:48:37.128542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:37.128599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.128617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:37.128632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.128647] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:37.128661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.128675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:04.477 [2024-07-10 15:48:37.128697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.128712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x165abd0 is same with the state(5) to be set 00:25:04.477 [2024-07-10 15:48:37.130657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:37.130684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.130711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:62304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:37.130730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.130763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:62312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:37.130777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.130792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:62320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:37.130806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.477 [2024-07-10 15:48:37.130821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:62328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.477 [2024-07-10 15:48:37.130835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.130850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:62336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.130864] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.130878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:62360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.130892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.130923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.130936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.130951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:62392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.130964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.130978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:62400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.130991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:62408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:62416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:61936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:62424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.131359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.131387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.131445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:62472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:62480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:62488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:62496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.131590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:62512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 
[2024-07-10 15:48:37.131798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:62056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:62064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:62072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:62520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:62528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:62536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:62544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.131980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.131994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:62568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132076] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:62576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:62592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:62616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:62624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:62632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:62648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:91 nsid:1 lba:62656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:62672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:62120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:62176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:62200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:62224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:62232 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:62248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:62688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.132895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.132953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:04.478 [2024-07-10 15:48:37.132980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.132994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:62760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.478 [2024-07-10 15:48:37.133006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.478 [2024-07-10 15:48:37.133020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.478 [2024-07-10 15:48:37.133033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:62776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:62784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:62824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133274] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:62864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:62880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133609] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:62920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:62952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.133907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:63000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.133980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:63008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.133993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:63032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:63040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:63064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 
[2024-07-10 15:48:37.134246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:63080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:63088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:63096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:04.479 [2024-07-10 15:48:37.134318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:62272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:62280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:62352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:62368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:62376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:04.479 [2024-07-10 15:48:37.134564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134580] nvme_tcp.c: 
322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x167b530 is same with the state(5) to be set 00:25:04.479 [2024-07-10 15:48:37.134596] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:04.479 [2024-07-10 15:48:37.134608] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:04.479 [2024-07-10 15:48:37.134625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62440 len:8 PRP1 0x0 PRP2 0x0 00:25:04.479 [2024-07-10 15:48:37.134638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:04.479 [2024-07-10 15:48:37.134699] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x167b530 was disconnected and freed. reset controller. 00:25:04.479 [2024-07-10 15:48:37.134718] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:25:04.479 [2024-07-10 15:48:37.134757] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:04.479 [2024-07-10 15:48:37.136962] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:04.479 [2024-07-10 15:48:37.137001] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x165abd0 (9): Bad file descriptor 00:25:04.479 [2024-07-10 15:48:37.166318] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:04.479 00:25:04.479 Latency(us) 00:25:04.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.479 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:04.479 Verification LBA range: start 0x0 length 0x4000 00:25:04.479 NVMe0n1 : 15.01 12896.62 50.38 327.66 0.00 9661.91 843.47 15437.37 00:25:04.479 =================================================================================================================== 00:25:04.479 Total : 12896.62 50.38 327.66 0.00 9661.91 843.47 15437.37 00:25:04.479 Received shutdown signal, test time was about 15.000000 seconds 00:25:04.479 00:25:04.479 Latency(us) 00:25:04.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.479 =================================================================================================================== 00:25:04.479 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.479 15:48:43 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:25:04.479 15:48:43 -- host/failover.sh@65 -- # count=3 00:25:04.479 15:48:43 -- host/failover.sh@67 -- # (( count != 3 )) 00:25:04.479 15:48:43 -- host/failover.sh@73 -- # bdevperf_pid=2212822 00:25:04.479 15:48:43 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:25:04.479 15:48:43 -- host/failover.sh@75 -- # waitforlisten 2212822 /var/tmp/bdevperf.sock 00:25:04.479 15:48:43 -- common/autotest_common.sh@819 -- # '[' -z 2212822 ']' 00:25:04.479 15:48:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.479 15:48:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:04.479 15:48:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
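The host/failover.sh@65-@67 steps above are the pass/fail gate for the 15-second run: the script counts the 'Resetting controller successful' notices emitted by bdev_nvme and fails unless exactly three successful resets were recorded during the run. A minimal standalone sketch of that check, assuming the bdevperf output was captured to the try.txt log that the script cats and removes further down:

    # sketch of the failover.sh@65-@67 check; the input file is an assumption
    log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    count=$(grep -c 'Resetting controller successful' "$log")
    if (( count != 3 )); then
        echo "expected 3 successful controller resets, got $count" >&2
        exit 1
    fi

The script then restarts bdevperf with -z -r /var/tmp/bdevperf.sock, so the new instance idles on its RPC socket until it is told to run, which is what lets the paths be attached and torn down again before any I/O is issued.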
00:25:04.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.479 15:48:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:04.479 15:48:43 -- common/autotest_common.sh@10 -- # set +x 00:25:05.043 15:48:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:05.043 15:48:44 -- common/autotest_common.sh@852 -- # return 0 00:25:05.043 15:48:44 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:05.302 [2024-07-10 15:48:44.495380] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:05.302 15:48:44 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:05.560 [2024-07-10 15:48:44.724022] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:05.560 15:48:44 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.126 NVMe0n1 00:25:06.126 15:48:45 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.384 00:25:06.384 15:48:45 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:06.949 00:25:06.949 15:48:46 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:06.949 15:48:46 -- host/failover.sh@82 -- # grep -q NVMe0 00:25:07.206 15:48:46 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:07.206 15:48:46 -- host/failover.sh@87 -- # sleep 3 00:25:10.484 15:48:49 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:10.484 15:48:49 -- host/failover.sh@88 -- # grep -q NVMe0 00:25:10.484 15:48:49 -- host/failover.sh@90 -- # run_test_pid=2213641 00:25:10.484 15:48:49 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:10.484 15:48:49 -- host/failover.sh@92 -- # wait 2213641 00:25:11.857 0 00:25:11.857 15:48:50 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:11.857 [2024-07-10 15:48:43.320782] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:11.857 [2024-07-10 15:48:43.320881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2212822 ] 00:25:11.857 EAL: No free 2048 kB hugepages reported on node 1 00:25:11.857 [2024-07-10 15:48:43.379389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:11.857 [2024-07-10 15:48:43.482676] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:11.857 [2024-07-10 15:48:46.538694] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:25:11.857 [2024-07-10 15:48:46.538785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.857 [2024-07-10 15:48:46.538822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.857 [2024-07-10 15:48:46.538841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.857 [2024-07-10 15:48:46.538855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.857 [2024-07-10 15:48:46.538868] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.857 [2024-07-10 15:48:46.538882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.857 [2024-07-10 15:48:46.538896] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:11.857 [2024-07-10 15:48:46.538984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:11.857 [2024-07-10 15:48:46.539004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:11.857 [2024-07-10 15:48:46.539045] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:11.857 [2024-07-10 15:48:46.539077] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06bd0 (9): Bad file descriptor 00:25:11.857 [2024-07-10 15:48:46.640592] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:11.857 Running I/O for 1 seconds... 
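The sequence recorded above (host/failover.sh@72-@94) is the multipath variant of the test: two extra listeners are added to the subsystem on the target, the same controller is attached three times as paths behind bdev NVMe0 over the bdevperf RPC socket, the active path is detached, and the queued verify job is kicked off, so the I/O in the captured log above runs over whichever path bdev_nvme failed over to. Condensed into plain rpc.py calls (the shell variables are mine; everything else is taken from the xtrace):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bdevperf.sock
    NQN=nqn.2016-06.io.spdk:cnode1
    # target side: expose the subsystem on two more ports
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
    $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
    # bdevperf side: three paths behind one bdev, then drop the active one
    for port in 4420 4421 4422; do
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s $port -f ipv4 -n $NQN
    done
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
    # release the idling bdevperf so it runs the verify job over a surviving path
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests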
00:25:11.857 00:25:11.858 Latency(us) 00:25:11.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:11.858 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:11.858 Verification LBA range: start 0x0 length 0x4000 00:25:11.858 NVMe0n1 : 1.01 12985.63 50.73 0.00 0.00 9812.31 1201.49 11019.76 00:25:11.858 =================================================================================================================== 00:25:11.858 Total : 12985.63 50.73 0.00 0.00 9812.31 1201.49 11019.76 00:25:11.858 15:48:50 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:11.858 15:48:50 -- host/failover.sh@95 -- # grep -q NVMe0 00:25:11.858 15:48:51 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.115 15:48:51 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:12.115 15:48:51 -- host/failover.sh@99 -- # grep -q NVMe0 00:25:12.373 15:48:51 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:12.630 15:48:51 -- host/failover.sh@101 -- # sleep 3 00:25:15.913 15:48:54 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:15.913 15:48:54 -- host/failover.sh@103 -- # grep -q NVMe0 00:25:15.913 15:48:55 -- host/failover.sh@108 -- # killprocess 2212822 00:25:15.913 15:48:55 -- common/autotest_common.sh@926 -- # '[' -z 2212822 ']' 00:25:15.913 15:48:55 -- common/autotest_common.sh@930 -- # kill -0 2212822 00:25:15.913 15:48:55 -- common/autotest_common.sh@931 -- # uname 00:25:15.913 15:48:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:15.913 15:48:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2212822 00:25:15.913 15:48:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:15.913 15:48:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:15.913 15:48:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2212822' 00:25:15.913 killing process with pid 2212822 00:25:15.913 15:48:55 -- common/autotest_common.sh@945 -- # kill 2212822 00:25:15.913 15:48:55 -- common/autotest_common.sh@950 -- # wait 2212822 00:25:16.172 15:48:55 -- host/failover.sh@110 -- # sync 00:25:16.172 15:48:55 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.504 15:48:55 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:16.504 15:48:55 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:16.504 15:48:55 -- host/failover.sh@116 -- # nvmftestfini 00:25:16.504 15:48:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:16.504 15:48:55 -- nvmf/common.sh@116 -- # sync 00:25:16.504 15:48:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:16.504 15:48:55 -- nvmf/common.sh@119 -- # set +e 00:25:16.504 15:48:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:16.504 15:48:55 -- nvmf/common.sh@121 -- 
# modprobe -v -r nvme-tcp 00:25:16.504 rmmod nvme_tcp 00:25:16.504 rmmod nvme_fabrics 00:25:16.504 rmmod nvme_keyring 00:25:16.504 15:48:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:16.504 15:48:55 -- nvmf/common.sh@123 -- # set -e 00:25:16.504 15:48:55 -- nvmf/common.sh@124 -- # return 0 00:25:16.504 15:48:55 -- nvmf/common.sh@477 -- # '[' -n 2210349 ']' 00:25:16.504 15:48:55 -- nvmf/common.sh@478 -- # killprocess 2210349 00:25:16.504 15:48:55 -- common/autotest_common.sh@926 -- # '[' -z 2210349 ']' 00:25:16.504 15:48:55 -- common/autotest_common.sh@930 -- # kill -0 2210349 00:25:16.504 15:48:55 -- common/autotest_common.sh@931 -- # uname 00:25:16.504 15:48:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:16.504 15:48:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2210349 00:25:16.504 15:48:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:16.504 15:48:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:16.504 15:48:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2210349' 00:25:16.504 killing process with pid 2210349 00:25:16.504 15:48:55 -- common/autotest_common.sh@945 -- # kill 2210349 00:25:16.504 15:48:55 -- common/autotest_common.sh@950 -- # wait 2210349 00:25:16.771 15:48:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:16.771 15:48:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:16.771 15:48:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:16.771 15:48:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:16.771 15:48:56 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:16.771 15:48:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.771 15:48:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:16.771 15:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.303 15:48:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:19.303 00:25:19.303 real 0m37.184s 00:25:19.303 user 2m10.766s 00:25:19.303 sys 0m6.353s 00:25:19.303 15:48:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:19.303 15:48:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 ************************************ 00:25:19.303 END TEST nvmf_failover 00:25:19.303 ************************************ 00:25:19.303 15:48:58 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:19.303 15:48:58 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:19.303 15:48:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:19.303 15:48:58 -- common/autotest_common.sh@10 -- # set +x 00:25:19.303 ************************************ 00:25:19.303 START TEST nvmf_discovery 00:25:19.303 ************************************ 00:25:19.303 15:48:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:19.303 * Looking for test storage... 
00:25:19.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.303 15:48:58 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.303 15:48:58 -- nvmf/common.sh@7 -- # uname -s 00:25:19.303 15:48:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.303 15:48:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.303 15:48:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.303 15:48:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.303 15:48:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.303 15:48:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.303 15:48:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.303 15:48:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.303 15:48:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.303 15:48:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.303 15:48:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.303 15:48:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:19.303 15:48:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.303 15:48:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.303 15:48:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.303 15:48:58 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.303 15:48:58 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.303 15:48:58 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.303 15:48:58 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.303 15:48:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 15:48:58 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 15:48:58 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 15:48:58 -- paths/export.sh@5 -- # export PATH 00:25:19.303 15:48:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.303 15:48:58 -- nvmf/common.sh@46 -- # : 0 00:25:19.303 15:48:58 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:19.303 15:48:58 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:19.303 15:48:58 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:19.303 15:48:58 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.303 15:48:58 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.303 15:48:58 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:19.303 15:48:58 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:19.303 15:48:58 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:19.303 15:48:58 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:19.303 15:48:58 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:19.303 15:48:58 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:19.303 15:48:58 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:19.303 15:48:58 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:19.303 15:48:58 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:19.303 15:48:58 -- host/discovery.sh@25 -- # nvmftestinit 00:25:19.303 15:48:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:19.303 15:48:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.303 15:48:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:19.303 15:48:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:19.303 15:48:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:19.303 15:48:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.303 15:48:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.303 15:48:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.303 15:48:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:19.303 15:48:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:19.303 15:48:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:19.303 15:48:58 -- common/autotest_common.sh@10 -- # set +x 00:25:21.205 15:49:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:21.205 15:49:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:21.205 15:49:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:21.205 15:49:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:21.205 15:49:00 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:21.205 15:49:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:21.205 15:49:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:21.205 15:49:00 -- nvmf/common.sh@294 -- # net_devs=() 00:25:21.205 15:49:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:21.205 15:49:00 -- nvmf/common.sh@295 -- # e810=() 00:25:21.205 15:49:00 -- nvmf/common.sh@295 -- # local -ga e810 00:25:21.205 15:49:00 -- nvmf/common.sh@296 -- # x722=() 00:25:21.205 15:49:00 -- nvmf/common.sh@296 -- # local -ga x722 00:25:21.205 15:49:00 -- nvmf/common.sh@297 -- # mlx=() 00:25:21.205 15:49:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:21.205 15:49:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.205 15:49:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:21.205 15:49:00 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:21.205 15:49:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:21.205 15:49:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.205 15:49:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:21.205 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:21.205 15:49:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.205 15:49:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:21.205 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:21.205 15:49:00 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:21.205 15:49:00 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.205 
15:49:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.205 15:49:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.205 15:49:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.205 15:49:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:21.205 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:21.205 15:49:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.205 15:49:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.205 15:49:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.205 15:49:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.205 15:49:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.205 15:49:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:21.205 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:21.205 15:49:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.205 15:49:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:21.205 15:49:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:21.205 15:49:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:21.205 15:49:00 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:21.205 15:49:00 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.205 15:49:00 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.206 15:49:00 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.206 15:49:00 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:21.206 15:49:00 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.206 15:49:00 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.206 15:49:00 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:21.206 15:49:00 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.206 15:49:00 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.206 15:49:00 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:21.206 15:49:00 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:21.206 15:49:00 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.206 15:49:00 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.206 15:49:00 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.206 15:49:00 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.206 15:49:00 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:21.206 15:49:00 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.206 15:49:00 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.206 15:49:00 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.206 15:49:00 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:21.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:25:21.206 00:25:21.206 --- 10.0.0.2 ping statistics --- 00:25:21.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.206 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:25:21.206 15:49:00 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:21.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:25:21.206 00:25:21.206 --- 10.0.0.1 ping statistics --- 00:25:21.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.206 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:25:21.206 15:49:00 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.206 15:49:00 -- nvmf/common.sh@410 -- # return 0 00:25:21.206 15:49:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:21.206 15:49:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.206 15:49:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:21.206 15:49:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:21.206 15:49:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.206 15:49:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:21.206 15:49:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:21.206 15:49:00 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:25:21.206 15:49:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:21.206 15:49:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:21.206 15:49:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.206 15:49:00 -- nvmf/common.sh@469 -- # nvmfpid=2216273 00:25:21.206 15:49:00 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:21.206 15:49:00 -- nvmf/common.sh@470 -- # waitforlisten 2216273 00:25:21.206 15:49:00 -- common/autotest_common.sh@819 -- # '[' -z 2216273 ']' 00:25:21.206 15:49:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.206 15:49:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:21.206 15:49:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.206 15:49:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:21.206 15:49:00 -- common/autotest_common.sh@10 -- # set +x 00:25:21.206 [2024-07-10 15:49:00.338658] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:21.206 [2024-07-10 15:49:00.338752] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.206 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.206 [2024-07-10 15:49:00.408502] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.206 [2024-07-10 15:49:00.522184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:21.206 [2024-07-10 15:49:00.522353] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.206 [2024-07-10 15:49:00.522373] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.206 [2024-07-10 15:49:00.522389] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
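The nvmf_tcp_init xtrace above is what turns the two ice ports into a self-contained NVMe/TCP test bed: the target-side port cvl_0_0 is moved into its own network namespace and given 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1 in the root namespace, port 4420 is opened in the firewall, reachability is ping-checked in both directions, and the nvmf_tgt for this test is launched inside that namespace. Condensed (interface and namespace names are the ones this rig detected):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # root namespace -> namespaced target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # namespaced target -> initiator
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

Keeping the two ports in separate namespaces forces the NVMe/TCP traffic over the physical link between them instead of letting it short-circuit through the local stack.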
00:25:21.206 [2024-07-10 15:49:00.522438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.141 15:49:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.141 15:49:01 -- common/autotest_common.sh@852 -- # return 0 00:25:22.141 15:49:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:22.141 15:49:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:22.141 15:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.141 15:49:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.141 15:49:01 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:22.141 15:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.141 15:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.141 [2024-07-10 15:49:01.281488] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.141 15:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.141 15:49:01 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:22.141 15:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.141 15:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.141 [2024-07-10 15:49:01.289627] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:22.141 15:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.141 15:49:01 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:22.141 15:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.141 15:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.141 null0 00:25:22.141 15:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.141 15:49:01 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:22.141 15:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.141 15:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.141 null1 00:25:22.141 15:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.141 15:49:01 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:22.141 15:49:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.142 15:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.142 15:49:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.142 15:49:01 -- host/discovery.sh@45 -- # hostpid=2216433 00:25:22.142 15:49:01 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:22.142 15:49:01 -- host/discovery.sh@46 -- # waitforlisten 2216433 /tmp/host.sock 00:25:22.142 15:49:01 -- common/autotest_common.sh@819 -- # '[' -z 2216433 ']' 00:25:22.142 15:49:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:22.142 15:49:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:22.142 15:49:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:22.142 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:22.142 15:49:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:22.142 15:49:01 -- common/autotest_common.sh@10 -- # set +x 00:25:22.142 [2024-07-10 15:49:01.357345] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
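At this point discovery.sh has both halves in place: the namespaced nvmf_tgt is the target, with a TCP transport created, the well-known discovery subsystem listening on 10.0.0.2:8009, and two null bdevs staged for the subsystems the test adds later, while a second nvmf_tgt started with -m 0x1 -r /tmp/host.sock plays the NVMe-oF host. A condensed sketch of the RPC sequence behind the script's rpc_cmd wrapper (using the default /var/tmp/spdk.sock socket for the target side is an assumption), including the discovery start the host is given just below:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # target side
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $RPC bdev_null_create null0 1000 512     # same size/block-size arguments as the calls above
    $RPC bdev_null_create null1 1000 512
    # host side: follow the discovery log page and attach whatever it advertises
    $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test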
00:25:22.142 [2024-07-10 15:49:01.357420] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2216433 ] 00:25:22.142 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.142 [2024-07-10 15:49:01.419454] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.400 [2024-07-10 15:49:01.534240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:22.400 [2024-07-10 15:49:01.534418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.965 15:49:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:22.965 15:49:02 -- common/autotest_common.sh@852 -- # return 0 00:25:22.965 15:49:02 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:22.965 15:49:02 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:22.965 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.965 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.965 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.965 15:49:02 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:22.965 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.965 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.965 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:22.965 15:49:02 -- host/discovery.sh@72 -- # notify_id=0 00:25:22.965 15:49:02 -- host/discovery.sh@78 -- # get_subsystem_names 00:25:22.965 15:49:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:22.965 15:49:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:22.965 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:22.965 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:22.965 15:49:02 -- host/discovery.sh@59 -- # sort 00:25:22.965 15:49:02 -- host/discovery.sh@59 -- # xargs 00:25:22.965 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:25:23.224 15:49:02 -- host/discovery.sh@79 -- # get_bdev_list 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # sort 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # xargs 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:25:23.224 15:49:02 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@82 -- # get_subsystem_names 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # sort 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # xargs 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:25:23.224 15:49:02 -- host/discovery.sh@83 -- # get_bdev_list 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # sort 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # xargs 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:23.224 15:49:02 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@86 -- # get_subsystem_names 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # sort 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # xargs 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:25:23.224 15:49:02 -- host/discovery.sh@87 -- # get_bdev_list 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # sort 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- host/discovery.sh@55 -- # xargs 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:23.224 15:49:02 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 [2024-07-10 15:49:02.573145] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.224 15:49:02 -- host/discovery.sh@92 -- # get_subsystem_names 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:23.224 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:23.224 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.224 15:49:02 -- host/discovery.sh@59 -- # sort 00:25:23.224 15:49:02 -- 
host/discovery.sh@59 -- # xargs 00:25:23.224 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.482 15:49:02 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:23.482 15:49:02 -- host/discovery.sh@93 -- # get_bdev_list 00:25:23.482 15:49:02 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:23.482 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.482 15:49:02 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:23.482 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.482 15:49:02 -- host/discovery.sh@55 -- # sort 00:25:23.482 15:49:02 -- host/discovery.sh@55 -- # xargs 00:25:23.482 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.482 15:49:02 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:25:23.482 15:49:02 -- host/discovery.sh@94 -- # get_notification_count 00:25:23.482 15:49:02 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:23.482 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.482 15:49:02 -- host/discovery.sh@74 -- # jq '. | length' 00:25:23.482 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.482 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.482 15:49:02 -- host/discovery.sh@74 -- # notification_count=0 00:25:23.482 15:49:02 -- host/discovery.sh@75 -- # notify_id=0 00:25:23.482 15:49:02 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:25:23.482 15:49:02 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:23.482 15:49:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:23.482 15:49:02 -- common/autotest_common.sh@10 -- # set +x 00:25:23.482 15:49:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:23.482 15:49:02 -- host/discovery.sh@100 -- # sleep 1 00:25:24.052 [2024-07-10 15:49:03.351617] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:24.052 [2024-07-10 15:49:03.351657] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:24.052 [2024-07-10 15:49:03.351680] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:24.309 [2024-07-10 15:49:03.437967] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:24.309 [2024-07-10 15:49:03.542057] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:24.309 [2024-07-10 15:49:03.542086] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:24.566 15:49:03 -- host/discovery.sh@101 -- # get_subsystem_names 00:25:24.566 15:49:03 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:24.566 15:49:03 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:24.566 15:49:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.566 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.566 15:49:03 -- host/discovery.sh@59 -- # sort 00:25:24.566 15:49:03 -- host/discovery.sh@59 -- # xargs 00:25:24.566 15:49:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@102 -- # get_bdev_list 00:25:24.566 15:49:03 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:24.566 15:49:03 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:24.566 15:49:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.566 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.566 15:49:03 -- host/discovery.sh@55 -- # sort 00:25:24.566 15:49:03 -- host/discovery.sh@55 -- # xargs 00:25:24.566 15:49:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:25:24.566 15:49:03 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:24.566 15:49:03 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:24.566 15:49:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.566 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.566 15:49:03 -- host/discovery.sh@63 -- # sort -n 00:25:24.566 15:49:03 -- host/discovery.sh@63 -- # xargs 00:25:24.566 15:49:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@104 -- # get_notification_count 00:25:24.566 15:49:03 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:24.566 15:49:03 -- host/discovery.sh@74 -- # jq '. | length' 00:25:24.566 15:49:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.566 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.566 15:49:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@74 -- # notification_count=1 00:25:24.566 15:49:03 -- host/discovery.sh@75 -- # notify_id=1 00:25:24.566 15:49:03 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:24.566 15:49:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.566 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:25:24.566 15:49:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.566 15:49:03 -- host/discovery.sh@109 -- # sleep 1 00:25:25.939 15:49:04 -- host/discovery.sh@110 -- # get_bdev_list 00:25:25.939 15:49:04 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:25.939 15:49:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.939 15:49:04 -- common/autotest_common.sh@10 -- # set +x 00:25:25.939 15:49:04 -- host/discovery.sh@55 -- # sort 00:25:25.939 15:49:04 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:25.939 15:49:04 -- host/discovery.sh@55 -- # xargs 00:25:25.939 15:49:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.939 15:49:04 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:25.939 15:49:04 -- host/discovery.sh@111 -- # get_notification_count 00:25:25.939 15:49:04 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:25.939 15:49:04 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:25.939 15:49:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.939 15:49:04 -- common/autotest_common.sh@10 -- # set +x 00:25:25.939 15:49:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.939 15:49:04 -- host/discovery.sh@74 -- # notification_count=1 00:25:25.939 15:49:04 -- host/discovery.sh@75 -- # notify_id=2 00:25:25.939 15:49:04 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:25:25.939 15:49:04 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:25.939 15:49:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.939 15:49:04 -- common/autotest_common.sh@10 -- # set +x 00:25:25.939 [2024-07-10 15:49:04.992660] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:25.939 [2024-07-10 15:49:04.992934] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:25.939 [2024-07-10 15:49:04.992970] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:25.939 15:49:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.939 15:49:04 -- host/discovery.sh@117 -- # sleep 1 00:25:25.939 [2024-07-10 15:49:05.119363] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:26.197 [2024-07-10 15:49:05.378673] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:26.197 [2024-07-10 15:49:05.378709] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:26.197 [2024-07-10 15:49:05.378720] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:26.762 15:49:05 -- host/discovery.sh@118 -- # get_subsystem_names 00:25:26.762 15:49:05 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:26.762 15:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.762 15:49:06 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:26.762 15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.762 15:49:06 -- host/discovery.sh@59 -- # sort 00:25:26.762 15:49:06 -- host/discovery.sh@59 -- # xargs 00:25:26.762 15:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.762 15:49:06 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.762 15:49:06 -- host/discovery.sh@119 -- # get_bdev_list 00:25:26.762 15:49:06 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:26.762 15:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.762 15:49:06 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:26.762 15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.762 15:49:06 -- host/discovery.sh@55 -- # sort 00:25:26.762 15:49:06 -- host/discovery.sh@55 -- # xargs 00:25:26.762 15:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.762 15:49:06 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:26.762 15:49:06 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:25:26.762 15:49:06 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:26.762 15:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.762 15:49:06 -- host/discovery.sh@63 -- # jq -r 
'.[].ctrlrs[].trid.trsvcid' 00:25:26.762 15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.762 15:49:06 -- host/discovery.sh@63 -- # sort -n 00:25:26.762 15:49:06 -- host/discovery.sh@63 -- # xargs 00:25:26.762 15:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:26.762 15:49:06 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:26.762 15:49:06 -- host/discovery.sh@121 -- # get_notification_count 00:25:26.762 15:49:06 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:26.762 15:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:26.762 15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:26.762 15:49:06 -- host/discovery.sh@74 -- # jq '. | length' 00:25:26.762 15:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.021 15:49:06 -- host/discovery.sh@74 -- # notification_count=0 00:25:27.022 15:49:06 -- host/discovery.sh@75 -- # notify_id=2 00:25:27.022 15:49:06 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:25:27.022 15:49:06 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:27.022 15:49:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.022 15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:25:27.022 [2024-07-10 15:49:06.165339] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:27.022 [2024-07-10 15:49:06.165375] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:27.022 15:49:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.022 15:49:06 -- host/discovery.sh@127 -- # sleep 1 00:25:27.022 [2024-07-10 15:49:06.171923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.022 [2024-07-10 15:49:06.171957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.022 [2024-07-10 15:49:06.171983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.022 [2024-07-10 15:49:06.171998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.022 [2024-07-10 15:49:06.172013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.022 [2024-07-10 15:49:06.172028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.022 [2024-07-10 15:49:06.172058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:27.022 [2024-07-10 15:49:06.172071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:27.022 [2024-07-10 15:49:06.172092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.181930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.191980] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.022 [2024-07-10 15:49:06.192188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.192353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.192382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1b80 with addr=10.0.0.2, port=4420 00:25:27.022 [2024-07-10 15:49:06.192402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.192435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.192491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.022 [2024-07-10 15:49:06.192511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.022 [2024-07-10 15:49:06.192525] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.022 [2024-07-10 15:49:06.192546] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.022 [2024-07-10 15:49:06.202070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.022 [2024-07-10 15:49:06.202298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.202472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.202500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1b80 with addr=10.0.0.2, port=4420 00:25:27.022 [2024-07-10 15:49:06.202516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.202538] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.202572] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.022 [2024-07-10 15:49:06.202589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.022 [2024-07-10 15:49:06.202602] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.022 [2024-07-10 15:49:06.202622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
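The connect() failures above are the expected fallout of the nvmf_subsystem_remove_listener call at discovery.sh@126: the target no longer accepts TCP connections on 10.0.0.2:4420, so every reconnect attempt from the host's bdev_nvme layer fails with errno 111 (ECONNREFUSED) until the discovery poller prunes the stale path. The test drives this through the rpc_cmd harness wrapper; a roughly equivalent standalone sequence against the target's default RPC socket (addresses and NQN taken from this run) would be:

    # advertise the new listener first so the discovery log page already carries 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    # then drop 4420; the host's existing path to 4420 starts failing with ECONNREFUSED
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420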
00:25:27.022 [2024-07-10 15:49:06.212148] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.022 [2024-07-10 15:49:06.212400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.212580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.212609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1b80 with addr=10.0.0.2, port=4420 00:25:27.022 [2024-07-10 15:49:06.212625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.212648] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.212713] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.022 [2024-07-10 15:49:06.212735] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.022 [2024-07-10 15:49:06.212750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.022 [2024-07-10 15:49:06.212791] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.022 [2024-07-10 15:49:06.222227] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.022 [2024-07-10 15:49:06.222469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.222599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.222625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1b80 with addr=10.0.0.2, port=4420 00:25:27.022 [2024-07-10 15:49:06.222641] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.222664] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.222685] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.022 [2024-07-10 15:49:06.222699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.022 [2024-07-10 15:49:06.222712] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.022 [2024-07-10 15:49:06.222744] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
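Interleaved with the reconnect noise, the trace keeps exercising get_notification_count: notify_get_notifications takes the last processed notification id as an offset, so the helper only counts events newer than notify_id, which climbs from 0 to 4 over this run as the discovered bdevs appear and disappear. A minimal standalone equivalent of that check, against the host app's socket from this run, might be:

    # count notifications newer than the last id we processed (2 at this point in the trace)
    scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 2 | jq '. | length'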
00:25:27.022 [2024-07-10 15:49:06.232304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.022 [2024-07-10 15:49:06.232518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.232714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.232756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1b80 with addr=10.0.0.2, port=4420 00:25:27.022 [2024-07-10 15:49:06.232774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.232799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.232849] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.022 [2024-07-10 15:49:06.232871] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.022 [2024-07-10 15:49:06.232887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.022 [2024-07-10 15:49:06.232907] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.022 [2024-07-10 15:49:06.242380] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.022 [2024-07-10 15:49:06.242661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.242851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.242880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1b80 with addr=10.0.0.2, port=4420 00:25:27.022 [2024-07-10 15:49:06.242897] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.242922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.242958] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.022 [2024-07-10 15:49:06.242977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.022 [2024-07-10 15:49:06.242992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.022 [2024-07-10 15:49:06.243013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
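Once the retry burst settles, the discovery poller drops the dead 4420 path ("not found") and keeps 4421 ("found again"), and the test confirms at discovery.sh@130 that port 4421 is the controller's only remaining path. The get_subsystem_paths helper it uses for that check can be sketched as a plain invocation against /tmp/host.sock (socket path and controller name taken from this run):

    # list the trsvcid of every path attached to controller nvme0; expect just 4421 at this point
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs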
00:25:27.022 [2024-07-10 15:49:06.252457] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:27.022 [2024-07-10 15:49:06.252651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.252802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:27.022 [2024-07-10 15:49:06.252829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a1b80 with addr=10.0.0.2, port=4420 00:25:27.022 [2024-07-10 15:49:06.252845] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a1b80 is same with the state(5) to be set 00:25:27.022 [2024-07-10 15:49:06.252867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a1b80 (9): Bad file descriptor 00:25:27.022 [2024-07-10 15:49:06.252915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:27.022 [2024-07-10 15:49:06.252935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:27.022 [2024-07-10 15:49:06.252950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:27.022 [2024-07-10 15:49:06.252972] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:27.022 [2024-07-10 15:49:06.253020] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:27.022 [2024-07-10 15:49:06.253049] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:27.956 15:49:07 -- host/discovery.sh@128 -- # get_subsystem_names 00:25:27.956 15:49:07 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:27.956 15:49:07 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:27.956 15:49:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.956 15:49:07 -- host/discovery.sh@59 -- # sort 00:25:27.956 15:49:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.956 15:49:07 -- host/discovery.sh@59 -- # xargs 00:25:27.956 15:49:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.956 15:49:07 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.956 15:49:07 -- host/discovery.sh@129 -- # get_bdev_list 00:25:27.956 15:49:07 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:27.956 15:49:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.956 15:49:07 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:27.956 15:49:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.956 15:49:07 -- host/discovery.sh@55 -- # sort 00:25:27.956 15:49:07 -- host/discovery.sh@55 -- # xargs 00:25:27.956 15:49:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.956 15:49:07 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:27.956 15:49:07 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:25:27.956 15:49:07 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:27.956 15:49:07 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:27.956 15:49:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.956 15:49:07 -- host/discovery.sh@63 -- # sort -n 00:25:27.956 15:49:07 -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.956 15:49:07 -- host/discovery.sh@63 -- # xargs 00:25:27.956 15:49:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.956 15:49:07 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:25:27.956 15:49:07 -- host/discovery.sh@131 -- # get_notification_count 00:25:27.956 15:49:07 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:27.956 15:49:07 -- host/discovery.sh@74 -- # jq '. | length' 00:25:27.956 15:49:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.956 15:49:07 -- common/autotest_common.sh@10 -- # set +x 00:25:27.956 15:49:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.213 15:49:07 -- host/discovery.sh@74 -- # notification_count=0 00:25:28.213 15:49:07 -- host/discovery.sh@75 -- # notify_id=2 00:25:28.213 15:49:07 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:25:28.213 15:49:07 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:28.213 15:49:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:28.213 15:49:07 -- common/autotest_common.sh@10 -- # set +x 00:25:28.213 15:49:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:28.213 15:49:07 -- host/discovery.sh@135 -- # sleep 1 00:25:29.140 15:49:08 -- host/discovery.sh@136 -- # get_subsystem_names 00:25:29.140 15:49:08 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:29.140 15:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.140 15:49:08 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:29.140 15:49:08 -- common/autotest_common.sh@10 -- # set +x 00:25:29.140 15:49:08 -- host/discovery.sh@59 -- # sort 00:25:29.140 15:49:08 -- host/discovery.sh@59 -- # xargs 00:25:29.140 15:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.140 15:49:08 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:25:29.140 15:49:08 -- host/discovery.sh@137 -- # get_bdev_list 00:25:29.140 15:49:08 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:29.140 15:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.140 15:49:08 -- common/autotest_common.sh@10 -- # set +x 00:25:29.140 15:49:08 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:29.140 15:49:08 -- host/discovery.sh@55 -- # sort 00:25:29.140 15:49:08 -- host/discovery.sh@55 -- # xargs 00:25:29.140 15:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.140 15:49:08 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:25:29.140 15:49:08 -- host/discovery.sh@138 -- # get_notification_count 00:25:29.140 15:49:08 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:29.140 15:49:08 -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:29.140 15:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.140 15:49:08 -- common/autotest_common.sh@10 -- # set +x 00:25:29.140 15:49:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:29.140 15:49:08 -- host/discovery.sh@74 -- # notification_count=2 00:25:29.140 15:49:08 -- host/discovery.sh@75 -- # notify_id=4 00:25:29.140 15:49:08 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:25:29.140 15:49:08 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:29.140 15:49:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:29.140 15:49:08 -- common/autotest_common.sh@10 -- # set +x 00:25:30.511 [2024-07-10 15:49:09.547393] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:30.511 [2024-07-10 15:49:09.547421] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:30.511 [2024-07-10 15:49:09.547467] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:30.511 [2024-07-10 15:49:09.674876] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:30.511 [2024-07-10 15:49:09.741894] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:30.511 [2024-07-10 15:49:09.741935] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:30.511 15:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.511 15:49:09 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:30.511 15:49:09 -- common/autotest_common.sh@640 -- # local es=0 00:25:30.511 15:49:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:30.511 15:49:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:30.512 15:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.512 15:49:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:30.512 15:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.512 15:49:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:30.512 15:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.512 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 request: 00:25:30.512 { 00:25:30.512 "name": "nvme", 00:25:30.512 "trtype": "tcp", 00:25:30.512 "traddr": "10.0.0.2", 00:25:30.512 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:30.512 "adrfam": "ipv4", 00:25:30.512 "trsvcid": "8009", 00:25:30.512 "wait_for_attach": true, 00:25:30.512 "method": "bdev_nvme_start_discovery", 00:25:30.512 "req_id": 1 00:25:30.512 } 00:25:30.512 Got JSON-RPC error response 00:25:30.512 response: 00:25:30.512 { 00:25:30.512 "code": -17, 00:25:30.512 "message": "File exists" 00:25:30.512 } 00:25:30.512 15:49:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:30.512 15:49:09 -- common/autotest_common.sh@643 -- # es=1 00:25:30.512 15:49:09 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:30.512 15:49:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:30.512 15:49:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:30.512 15:49:09 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:30.512 15:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.512 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # sort 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # xargs 00:25:30.512 15:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.512 15:49:09 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:25:30.512 15:49:09 -- host/discovery.sh@147 -- # get_bdev_list 00:25:30.512 15:49:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.512 15:49:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.512 15:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.512 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 15:49:09 -- host/discovery.sh@55 -- # sort 00:25:30.512 15:49:09 -- host/discovery.sh@55 -- # xargs 00:25:30.512 15:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.512 15:49:09 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:30.512 15:49:09 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:30.512 15:49:09 -- common/autotest_common.sh@640 -- # local es=0 00:25:30.512 15:49:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:30.512 15:49:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:30.512 15:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.512 15:49:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:30.512 15:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.512 15:49:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:30.512 15:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.512 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 request: 00:25:30.512 { 00:25:30.512 "name": "nvme_second", 00:25:30.512 "trtype": "tcp", 00:25:30.512 "traddr": "10.0.0.2", 00:25:30.512 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:30.512 "adrfam": "ipv4", 00:25:30.512 "trsvcid": "8009", 00:25:30.512 "wait_for_attach": true, 00:25:30.512 "method": "bdev_nvme_start_discovery", 00:25:30.512 "req_id": 1 00:25:30.512 } 00:25:30.512 Got JSON-RPC error response 00:25:30.512 response: 00:25:30.512 { 00:25:30.512 "code": -17, 00:25:30.512 "message": "File exists" 00:25:30.512 } 00:25:30.512 15:49:09 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:30.512 15:49:09 -- common/autotest_common.sh@643 -- # es=1 00:25:30.512 15:49:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:30.512 15:49:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:30.512 15:49:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:30.512 
15:49:09 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:30.512 15:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:30.512 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # sort 00:25:30.512 15:49:09 -- host/discovery.sh@67 -- # xargs 00:25:30.512 15:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.512 15:49:09 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:25:30.512 15:49:09 -- host/discovery.sh@153 -- # get_bdev_list 00:25:30.770 15:49:09 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:30.770 15:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.770 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:25:30.770 15:49:09 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:30.770 15:49:09 -- host/discovery.sh@55 -- # sort 00:25:30.770 15:49:09 -- host/discovery.sh@55 -- # xargs 00:25:30.770 15:49:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:30.770 15:49:09 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:30.770 15:49:09 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:30.770 15:49:09 -- common/autotest_common.sh@640 -- # local es=0 00:25:30.770 15:49:09 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:30.770 15:49:09 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:30.770 15:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.770 15:49:09 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:30.770 15:49:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:30.770 15:49:09 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:30.770 15:49:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:30.770 15:49:09 -- common/autotest_common.sh@10 -- # set +x 00:25:31.701 [2024-07-10 15:49:10.933366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.701 [2024-07-10 15:49:10.933575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.701 [2024-07-10 15:49:10.933604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1897070 with addr=10.0.0.2, port=8010 00:25:31.701 [2024-07-10 15:49:10.933633] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:31.701 [2024-07-10 15:49:10.933648] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:31.701 [2024-07-10 15:49:10.933662] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:32.633 [2024-07-10 15:49:11.935747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.633 [2024-07-10 15:49:11.935949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.633 [2024-07-10 15:49:11.935980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x1897070 with addr=10.0.0.2, port=8010 00:25:32.633 [2024-07-10 15:49:11.936003] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:32.633 [2024-07-10 15:49:11.936018] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:32.633 [2024-07-10 15:49:11.936031] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:33.564 [2024-07-10 15:49:12.937961] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:33.564 request: 00:25:33.564 { 00:25:33.564 "name": "nvme_second", 00:25:33.564 "trtype": "tcp", 00:25:33.564 "traddr": "10.0.0.2", 00:25:33.564 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:33.564 "adrfam": "ipv4", 00:25:33.564 "trsvcid": "8010", 00:25:33.564 "attach_timeout_ms": 3000, 00:25:33.564 "method": "bdev_nvme_start_discovery", 00:25:33.564 "req_id": 1 00:25:33.564 } 00:25:33.820 Got JSON-RPC error response 00:25:33.820 response: 00:25:33.820 { 00:25:33.820 "code": -110, 00:25:33.820 "message": "Connection timed out" 00:25:33.820 } 00:25:33.820 15:49:12 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:33.820 15:49:12 -- common/autotest_common.sh@643 -- # es=1 00:25:33.820 15:49:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:33.820 15:49:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:33.820 15:49:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:33.820 15:49:12 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:25:33.820 15:49:12 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:33.820 15:49:12 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:33.820 15:49:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.820 15:49:12 -- host/discovery.sh@67 -- # sort 00:25:33.820 15:49:12 -- common/autotest_common.sh@10 -- # set +x 00:25:33.820 15:49:12 -- host/discovery.sh@67 -- # xargs 00:25:33.820 15:49:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.820 15:49:12 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:25:33.820 15:49:12 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:25:33.820 15:49:12 -- host/discovery.sh@162 -- # kill 2216433 00:25:33.820 15:49:12 -- host/discovery.sh@163 -- # nvmftestfini 00:25:33.820 15:49:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:33.820 15:49:12 -- nvmf/common.sh@116 -- # sync 00:25:33.820 15:49:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:33.820 15:49:12 -- nvmf/common.sh@119 -- # set +e 00:25:33.820 15:49:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:33.820 15:49:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:33.820 rmmod nvme_tcp 00:25:33.820 rmmod nvme_fabrics 00:25:33.820 rmmod nvme_keyring 00:25:33.820 15:49:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:33.820 15:49:13 -- nvmf/common.sh@123 -- # set -e 00:25:33.820 15:49:13 -- nvmf/common.sh@124 -- # return 0 00:25:33.820 15:49:13 -- nvmf/common.sh@477 -- # '[' -n 2216273 ']' 00:25:33.820 15:49:13 -- nvmf/common.sh@478 -- # killprocess 2216273 00:25:33.820 15:49:13 -- common/autotest_common.sh@926 -- # '[' -z 2216273 ']' 00:25:33.820 15:49:13 -- common/autotest_common.sh@930 -- # kill -0 2216273 00:25:33.820 15:49:13 -- common/autotest_common.sh@931 -- # uname 00:25:33.820 15:49:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:33.820 15:49:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2216273 
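The -110 (Connection timed out) response above is the deliberately failing case: nothing listens on 10.0.0.2:8010, so the discovery attach keeps hitting connect() errno 111 until the 3000 ms attach timeout expires and the RPC completes with an error instead of being left pending. As issued through the harness in this run, the call was equivalent to:

    # expected to fail with JSON-RPC error -110 after roughly 3 s; no target listens on 8010
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second \
        -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000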
00:25:33.820 15:49:13 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:33.820 15:49:13 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:33.820 15:49:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2216273' 00:25:33.820 killing process with pid 2216273 00:25:33.820 15:49:13 -- common/autotest_common.sh@945 -- # kill 2216273 00:25:33.820 15:49:13 -- common/autotest_common.sh@950 -- # wait 2216273 00:25:34.077 15:49:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:34.077 15:49:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:34.077 15:49:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:34.077 15:49:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:34.077 15:49:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:34.077 15:49:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:34.077 15:49:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:34.077 15:49:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.610 15:49:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:36.610 00:25:36.610 real 0m17.219s 00:25:36.610 user 0m26.637s 00:25:36.610 sys 0m2.991s 00:25:36.610 15:49:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.610 15:49:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.610 ************************************ 00:25:36.610 END TEST nvmf_discovery 00:25:36.610 ************************************ 00:25:36.610 15:49:15 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:36.610 15:49:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:36.610 15:49:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:36.610 15:49:15 -- common/autotest_common.sh@10 -- # set +x 00:25:36.610 ************************************ 00:25:36.610 START TEST nvmf_discovery_remove_ifc 00:25:36.610 ************************************ 00:25:36.610 15:49:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:25:36.610 * Looking for test storage... 
00:25:36.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.610 15:49:15 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.610 15:49:15 -- nvmf/common.sh@7 -- # uname -s 00:25:36.610 15:49:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.610 15:49:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.610 15:49:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.610 15:49:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.610 15:49:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.610 15:49:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.610 15:49:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.610 15:49:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.610 15:49:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.610 15:49:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.610 15:49:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.610 15:49:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.610 15:49:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.610 15:49:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.610 15:49:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.610 15:49:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.610 15:49:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.610 15:49:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.610 15:49:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.610 15:49:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.610 15:49:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.610 15:49:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.610 15:49:15 -- paths/export.sh@5 -- # export PATH 00:25:36.610 15:49:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:36.610 15:49:15 -- nvmf/common.sh@46 -- # : 0 00:25:36.610 15:49:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:36.610 15:49:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:36.610 15:49:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:36.610 15:49:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.610 15:49:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.610 15:49:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:36.610 15:49:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:36.610 15:49:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:36.610 15:49:15 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:25:36.610 15:49:15 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:25:36.610 15:49:15 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:25:36.610 15:49:15 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:25:36.610 15:49:15 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:25:36.610 15:49:15 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:25:36.611 15:49:15 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:25:36.611 15:49:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:36.611 15:49:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.611 15:49:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:36.611 15:49:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:36.611 15:49:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:36.611 15:49:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.611 15:49:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.611 15:49:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.611 15:49:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:36.611 15:49:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:36.611 15:49:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:36.611 15:49:15 -- common/autotest_common.sh@10 -- # set +x 00:25:38.513 15:49:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:38.513 15:49:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:38.513 15:49:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:38.513 15:49:17 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:38.513 15:49:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:38.513 15:49:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:38.513 15:49:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:38.513 15:49:17 -- nvmf/common.sh@294 -- # net_devs=() 00:25:38.513 15:49:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:38.513 15:49:17 -- nvmf/common.sh@295 -- # e810=() 00:25:38.513 15:49:17 -- nvmf/common.sh@295 -- # local -ga e810 00:25:38.513 15:49:17 -- nvmf/common.sh@296 -- # x722=() 00:25:38.513 15:49:17 -- nvmf/common.sh@296 -- # local -ga x722 00:25:38.513 15:49:17 -- nvmf/common.sh@297 -- # mlx=() 00:25:38.513 15:49:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:38.513 15:49:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:38.513 15:49:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:38.513 15:49:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:38.513 15:49:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:38.513 15:49:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.513 15:49:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:38.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:38.513 15:49:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:38.513 15:49:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:38.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:38.513 15:49:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:38.513 15:49:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:38.513 15:49:17 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.513 15:49:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.513 15:49:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.513 15:49:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.513 15:49:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:38.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:38.513 15:49:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.513 15:49:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:38.513 15:49:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:38.513 15:49:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:38.513 15:49:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:38.513 15:49:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:38.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:38.513 15:49:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:38.513 15:49:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:38.513 15:49:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:38.513 15:49:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:38.513 15:49:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:38.513 15:49:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:38.513 15:49:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:38.513 15:49:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:38.513 15:49:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:38.513 15:49:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:38.513 15:49:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:38.513 15:49:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:38.513 15:49:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:38.513 15:49:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:38.513 15:49:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:38.513 15:49:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:38.513 15:49:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:38.513 15:49:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:38.513 15:49:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:38.513 15:49:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:38.513 15:49:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:38.513 15:49:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:38.513 15:49:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:38.513 15:49:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:38.513 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:38.513 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:25:38.513 00:25:38.513 --- 10.0.0.2 ping statistics --- 00:25:38.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.513 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:38.513 15:49:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:38.513 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:38.513 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:38.513 00:25:38.513 --- 10.0.0.1 ping statistics --- 00:25:38.513 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:38.513 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:38.513 15:49:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:38.513 15:49:17 -- nvmf/common.sh@410 -- # return 0 00:25:38.513 15:49:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:38.513 15:49:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:38.513 15:49:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:38.513 15:49:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:38.513 15:49:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:38.513 15:49:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:38.513 15:49:17 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:25:38.513 15:49:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:38.513 15:49:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:38.513 15:49:17 -- common/autotest_common.sh@10 -- # set +x 00:25:38.513 15:49:17 -- nvmf/common.sh@469 -- # nvmfpid=2219903 00:25:38.513 15:49:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:38.513 15:49:17 -- nvmf/common.sh@470 -- # waitforlisten 2219903 00:25:38.513 15:49:17 -- common/autotest_common.sh@819 -- # '[' -z 2219903 ']' 00:25:38.513 15:49:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:38.513 15:49:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:38.513 15:49:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:38.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:38.513 15:49:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:38.513 15:49:17 -- common/autotest_common.sh@10 -- # set +x 00:25:38.513 [2024-07-10 15:49:17.574062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:38.513 [2024-07-10 15:49:17.574139] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:38.513 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.513 [2024-07-10 15:49:17.639141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.513 [2024-07-10 15:49:17.747137] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.513 [2024-07-10 15:49:17.747291] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
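The NIC discovery and network staging traced above reduce to a handful of sysfs and iproute2 steps. As a minimal sketch (the PCI addresses 0000:0a:00.0/0000:0a:00.1, the cvl_0_0/cvl_0_1 interface names, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addressing are taken from the trace; the loop and comments are illustrative, not the harness itself):

  # Map the two supported E810 ports (vendor 0x8086, device 0x159b) to net device names via sysfs.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"                # -> cvl_0_0, cvl_0_1
  done

  # Move the target-side port into a private namespace and address both ends.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # Verify reachability in both directions before the target is started.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The two ping round-trips above (0.221 ms and 0.195 ms) confirm that traffic can flow between the two E810 ports before nvmf_tgt is launched inside the namespace.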
00:25:38.514 [2024-07-10 15:49:17.747308] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.514 [2024-07-10 15:49:17.747320] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.514 [2024-07-10 15:49:17.747347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:39.447 15:49:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:39.447 15:49:18 -- common/autotest_common.sh@852 -- # return 0 00:25:39.447 15:49:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:39.447 15:49:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:39.447 15:49:18 -- common/autotest_common.sh@10 -- # set +x 00:25:39.447 15:49:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.447 15:49:18 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:25:39.447 15:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.447 15:49:18 -- common/autotest_common.sh@10 -- # set +x 00:25:39.447 [2024-07-10 15:49:18.541732] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.447 [2024-07-10 15:49:18.549911] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:39.447 null0 00:25:39.447 [2024-07-10 15:49:18.581883] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.447 15:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.447 15:49:18 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2220065 00:25:39.447 15:49:18 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:25:39.447 15:49:18 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2220065 /tmp/host.sock 00:25:39.447 15:49:18 -- common/autotest_common.sh@819 -- # '[' -z 2220065 ']' 00:25:39.447 15:49:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:25:39.447 15:49:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:39.447 15:49:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:39.447 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:39.447 15:49:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:39.447 15:49:18 -- common/autotest_common.sh@10 -- # set +x 00:25:39.447 [2024-07-10 15:49:18.641106] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:39.447 [2024-07-10 15:49:18.641169] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2220065 ] 00:25:39.447 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.447 [2024-07-10 15:49:18.701768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.447 [2024-07-10 15:49:18.821065] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:39.447 [2024-07-10 15:49:18.821219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:40.382 15:49:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:40.382 15:49:19 -- common/autotest_common.sh@852 -- # return 0 00:25:40.382 15:49:19 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:40.382 15:49:19 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:25:40.382 15:49:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.382 15:49:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.382 15:49:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.382 15:49:19 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:25:40.382 15:49:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.382 15:49:19 -- common/autotest_common.sh@10 -- # set +x 00:25:40.382 15:49:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:40.382 15:49:19 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:25:40.382 15:49:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:40.382 15:49:19 -- common/autotest_common.sh@10 -- # set +x 00:25:41.758 [2024-07-10 15:49:20.741612] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:41.758 [2024-07-10 15:49:20.741650] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:41.758 [2024-07-10 15:49:20.741673] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:41.758 [2024-07-10 15:49:20.827969] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:41.758 [2024-07-10 15:49:20.891846] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:41.758 [2024-07-10 15:49:20.891901] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:41.758 [2024-07-10 15:49:20.891946] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:41.758 [2024-07-10 15:49:20.891973] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:41.758 [2024-07-10 15:49:20.892009] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:41.758 15:49:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.758 15:49:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.758 15:49:20 -- common/autotest_common.sh@10 -- # set +x 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.758 [2024-07-10 15:49:20.899923] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x172a1f0 was disconnected and freed. delete nvme_qpair. 00:25:41.758 15:49:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:41.758 15:49:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:41.758 15:49:20 -- common/autotest_common.sh@10 -- # set +x 00:25:41.758 15:49:20 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:41.758 15:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:41.758 15:49:21 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:41.758 15:49:21 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:42.693 15:49:22 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:42.693 15:49:22 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:42.693 15:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:42.693 15:49:22 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:42.693 15:49:22 -- common/autotest_common.sh@10 -- # set +x 00:25:42.693 15:49:22 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:42.693 15:49:22 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:42.693 15:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:42.951 15:49:22 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:42.951 15:49:22 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:43.884 15:49:23 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:43.884 15:49:23 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:43.884 15:49:23 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:43.884 15:49:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.884 15:49:23 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:43.884 15:49:23 -- common/autotest_common.sh@10 -- # set +x 00:25:43.884 15:49:23 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:43.884 15:49:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.884 15:49:23 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:43.884 15:49:23 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:44.816 15:49:24 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:44.816 15:49:24 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:25:44.816 15:49:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:44.816 15:49:24 -- common/autotest_common.sh@10 -- # set +x 00:25:44.816 15:49:24 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:44.816 15:49:24 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:44.816 15:49:24 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:44.816 15:49:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:44.816 15:49:24 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:44.816 15:49:24 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:46.188 15:49:25 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:46.188 15:49:25 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:46.188 15:49:25 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:46.188 15:49:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:46.188 15:49:25 -- common/autotest_common.sh@10 -- # set +x 00:25:46.188 15:49:25 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:46.188 15:49:25 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:46.188 15:49:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:46.188 15:49:25 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:46.188 15:49:25 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.120 15:49:26 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:47.120 15:49:26 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:47.120 15:49:26 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:47.120 15:49:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:47.120 15:49:26 -- common/autotest_common.sh@10 -- # set +x 00:25:47.120 15:49:26 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:47.120 15:49:26 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:47.120 15:49:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:47.120 15:49:26 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:47.120 15:49:26 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:47.120 [2024-07-10 15:49:26.332868] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:25:47.120 [2024-07-10 15:49:26.332935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.120 [2024-07-10 15:49:26.332958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.120 [2024-07-10 15:49:26.332978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.120 [2024-07-10 15:49:26.332993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.120 [2024-07-10 15:49:26.333010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.120 [2024-07-10 15:49:26.333033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.120 [2024-07-10 15:49:26.333050] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:25:47.120 [2024-07-10 15:49:26.333064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.120 [2024-07-10 15:49:26.333079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:47.120 [2024-07-10 15:49:26.333093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:47.120 [2024-07-10 15:49:26.333109] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f0810 is same with the state(5) to be set 00:25:47.120 [2024-07-10 15:49:26.342888] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f0810 (9): Bad file descriptor 00:25:47.120 [2024-07-10 15:49:26.352937] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:48.053 15:49:27 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:48.053 15:49:27 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:48.053 15:49:27 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:48.053 15:49:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.053 15:49:27 -- common/autotest_common.sh@10 -- # set +x 00:25:48.053 15:49:27 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:48.053 15:49:27 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:48.053 [2024-07-10 15:49:27.390479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:49.427 [2024-07-10 15:49:28.414475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:25:49.427 [2024-07-10 15:49:28.414552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16f0810 with addr=10.0.0.2, port=4420 00:25:49.427 [2024-07-10 15:49:28.414579] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16f0810 is same with the state(5) to be set 00:25:49.427 [2024-07-10 15:49:28.414618] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:49.427 [2024-07-10 15:49:28.414636] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:49.427 [2024-07-10 15:49:28.414650] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:49.427 [2024-07-10 15:49:28.414666] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:25:49.427 [2024-07-10 15:49:28.415109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f0810 (9): Bad file descriptor 00:25:49.427 [2024-07-10 15:49:28.415156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:49.427 [2024-07-10 15:49:28.415202] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:25:49.427 [2024-07-10 15:49:28.415244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.427 [2024-07-10 15:49:28.415267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.427 [2024-07-10 15:49:28.415286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.427 [2024-07-10 15:49:28.415301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.427 [2024-07-10 15:49:28.415316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.427 [2024-07-10 15:49:28.415340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.427 [2024-07-10 15:49:28.415357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.427 [2024-07-10 15:49:28.415372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.427 [2024-07-10 15:49:28.415388] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:49.427 [2024-07-10 15:49:28.415402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:49.427 [2024-07-10 15:49:28.415419] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
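This burst of aborted commands is the point of the test: the discovery controller was attached with deliberately short loss/reconnect timeouts, and the target-side interface was then pulled out from under it, so the reset is expected to fail. Condensed from the RPCs visible in the trace, the scenario is roughly as follows (a sketch; the polling loop stands in for the script's wait_for_bdev helper):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Attach via the discovery service with aggressive timeouts (seconds, not minutes).
  $RPC -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
      -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 \
      --wait-for-attach
  $RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'        # expect nvme0n1

  # Yank the address and link the target is listening on.
  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

  # Poll until the failed reset tears the controller down and the bdev list empties.
  while [ -n "$($RPC -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name')" ]; do
      sleep 1
  done

Once the list is empty, the interface is brought back up and the discovery service is expected to re-attach the subsystem as nvme1n1, which is what the remainder of this trace shows.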
00:25:49.427 [2024-07-10 15:49:28.415617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16f0c20 (9): Bad file descriptor 00:25:49.427 [2024-07-10 15:49:28.416633] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:25:49.427 [2024-07-10 15:49:28.416652] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:25:49.427 15:49:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.427 15:49:28 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:25:49.427 15:49:28 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.438 15:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.438 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.438 15:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:50.438 15:49:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:50.438 15:49:29 -- common/autotest_common.sh@10 -- # set +x 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:50.438 15:49:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:50.438 15:49:29 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.372 [2024-07-10 15:49:30.427610] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:51.372 [2024-07-10 15:49:30.427643] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:51.372 [2024-07-10 15:49:30.427666] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:51.372 15:49:30 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:51.372 15:49:30 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:51.372 15:49:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:51.372 15:49:30 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:51.372 15:49:30 -- common/autotest_common.sh@10 -- # set +x 00:25:51.372 15:49:30 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:51.372 15:49:30 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:51.372 [2024-07-10 15:49:30.555131] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:51.372 15:49:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:51.372 15:49:30 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:51.372 15:49:30 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:51.372 [2024-07-10 15:49:30.657265] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:51.372 [2024-07-10 15:49:30.657315] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:51.372 [2024-07-10 15:49:30.657352] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:51.372 [2024-07-10 15:49:30.657378] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:51.372 [2024-07-10 15:49:30.657394] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:51.372 [2024-07-10 15:49:30.666021] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x16ff1d0 was disconnected and freed. delete nvme_qpair. 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:52.305 15:49:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:52.305 15:49:31 -- common/autotest_common.sh@10 -- # set +x 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:52.305 15:49:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:52.305 15:49:31 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2220065 00:25:52.305 15:49:31 -- common/autotest_common.sh@926 -- # '[' -z 2220065 ']' 00:25:52.305 15:49:31 -- common/autotest_common.sh@930 -- # kill -0 2220065 00:25:52.305 15:49:31 -- common/autotest_common.sh@931 -- # uname 00:25:52.305 15:49:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.305 15:49:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2220065 00:25:52.305 15:49:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:52.305 15:49:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:52.305 15:49:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2220065' 00:25:52.305 killing process with pid 2220065 00:25:52.305 15:49:31 -- common/autotest_common.sh@945 -- # kill 2220065 00:25:52.305 15:49:31 -- common/autotest_common.sh@950 -- # wait 2220065 00:25:52.564 15:49:31 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:52.564 15:49:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:52.564 15:49:31 -- nvmf/common.sh@116 -- # sync 00:25:52.564 15:49:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:52.564 15:49:31 -- nvmf/common.sh@119 -- # set +e 00:25:52.564 15:49:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:52.564 15:49:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:52.564 rmmod nvme_tcp 00:25:52.564 rmmod nvme_fabrics 00:25:52.822 rmmod nvme_keyring 00:25:52.822 15:49:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:52.822 15:49:31 -- nvmf/common.sh@123 -- # set -e 00:25:52.822 15:49:31 -- 
nvmf/common.sh@124 -- # return 0 00:25:52.822 15:49:31 -- nvmf/common.sh@477 -- # '[' -n 2219903 ']' 00:25:52.822 15:49:31 -- nvmf/common.sh@478 -- # killprocess 2219903 00:25:52.822 15:49:31 -- common/autotest_common.sh@926 -- # '[' -z 2219903 ']' 00:25:52.822 15:49:31 -- common/autotest_common.sh@930 -- # kill -0 2219903 00:25:52.822 15:49:31 -- common/autotest_common.sh@931 -- # uname 00:25:52.822 15:49:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:52.822 15:49:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2219903 00:25:52.822 15:49:31 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:52.822 15:49:31 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:52.822 15:49:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2219903' 00:25:52.822 killing process with pid 2219903 00:25:52.822 15:49:31 -- common/autotest_common.sh@945 -- # kill 2219903 00:25:52.822 15:49:31 -- common/autotest_common.sh@950 -- # wait 2219903 00:25:53.081 15:49:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:53.081 15:49:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:53.081 15:49:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:53.081 15:49:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:53.081 15:49:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:53.081 15:49:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:53.081 15:49:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:53.081 15:49:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:54.985 15:49:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:54.985 00:25:54.985 real 0m18.914s 00:25:54.985 user 0m26.789s 00:25:54.985 sys 0m3.036s 00:25:54.985 15:49:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:54.985 15:49:34 -- common/autotest_common.sh@10 -- # set +x 00:25:54.985 ************************************ 00:25:54.985 END TEST nvmf_discovery_remove_ifc 00:25:54.985 ************************************ 00:25:54.985 15:49:34 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:25:54.985 15:49:34 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:54.985 15:49:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:54.985 15:49:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:54.985 15:49:34 -- common/autotest_common.sh@10 -- # set +x 00:25:54.985 ************************************ 00:25:54.985 START TEST nvmf_digest 00:25:54.985 ************************************ 00:25:54.985 15:49:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:25:55.245 * Looking for test storage... 
00:25:55.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:55.245 15:49:34 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:55.245 15:49:34 -- nvmf/common.sh@7 -- # uname -s 00:25:55.245 15:49:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:55.245 15:49:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:55.245 15:49:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:55.245 15:49:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:55.245 15:49:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:55.245 15:49:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:55.245 15:49:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:55.245 15:49:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:55.245 15:49:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:55.245 15:49:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:55.245 15:49:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.245 15:49:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:55.245 15:49:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:55.245 15:49:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:55.245 15:49:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:55.245 15:49:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:55.245 15:49:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:55.245 15:49:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:55.245 15:49:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:55.245 15:49:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.245 15:49:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.245 15:49:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.245 15:49:34 -- paths/export.sh@5 -- # export PATH 00:25:55.245 15:49:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:55.245 15:49:34 -- nvmf/common.sh@46 -- # : 0 00:25:55.245 15:49:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:55.245 15:49:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:55.245 15:49:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:55.245 15:49:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:55.245 15:49:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:55.245 15:49:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:55.245 15:49:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:55.245 15:49:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:55.245 15:49:34 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:25:55.245 15:49:34 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:25:55.245 15:49:34 -- host/digest.sh@16 -- # runtime=2 00:25:55.245 15:49:34 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:25:55.245 15:49:34 -- host/digest.sh@132 -- # nvmftestinit 00:25:55.245 15:49:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:55.245 15:49:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:55.245 15:49:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:55.245 15:49:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:55.245 15:49:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:55.245 15:49:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:55.245 15:49:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:55.245 15:49:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:55.245 15:49:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:55.245 15:49:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:55.245 15:49:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:55.245 15:49:34 -- common/autotest_common.sh@10 -- # set +x 00:25:57.148 15:49:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:57.148 15:49:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:57.148 15:49:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:57.148 15:49:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:57.148 15:49:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:57.148 15:49:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:57.148 15:49:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:57.148 15:49:36 -- 
nvmf/common.sh@294 -- # net_devs=() 00:25:57.148 15:49:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:57.148 15:49:36 -- nvmf/common.sh@295 -- # e810=() 00:25:57.148 15:49:36 -- nvmf/common.sh@295 -- # local -ga e810 00:25:57.148 15:49:36 -- nvmf/common.sh@296 -- # x722=() 00:25:57.148 15:49:36 -- nvmf/common.sh@296 -- # local -ga x722 00:25:57.148 15:49:36 -- nvmf/common.sh@297 -- # mlx=() 00:25:57.148 15:49:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:57.148 15:49:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:57.148 15:49:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:57.148 15:49:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:57.148 15:49:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:57.148 15:49:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:57.148 15:49:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:57.148 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:57.148 15:49:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:57.148 15:49:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:57.148 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:57.148 15:49:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:57.148 15:49:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:57.148 15:49:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.148 15:49:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:57.148 15:49:36 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.148 15:49:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:57.148 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:57.148 15:49:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.148 15:49:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:57.148 15:49:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:57.148 15:49:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:57.148 15:49:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:57.148 15:49:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:57.148 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:57.148 15:49:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:57.148 15:49:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:57.148 15:49:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:57.148 15:49:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:57.148 15:49:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:57.148 15:49:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:57.148 15:49:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:57.148 15:49:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:57.148 15:49:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:57.148 15:49:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:57.148 15:49:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:57.148 15:49:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:57.148 15:49:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:57.148 15:49:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:57.148 15:49:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:57.148 15:49:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:57.148 15:49:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:57.148 15:49:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:57.148 15:49:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:57.148 15:49:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:57.148 15:49:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:57.148 15:49:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:57.148 15:49:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:57.148 15:49:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:57.148 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:57.148 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:25:57.148 00:25:57.148 --- 10.0.0.2 ping statistics --- 00:25:57.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.148 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:25:57.148 15:49:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:57.148 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:57.148 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:25:57.148 00:25:57.148 --- 10.0.0.1 ping statistics --- 00:25:57.148 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:57.148 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:25:57.148 15:49:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:57.148 15:49:36 -- nvmf/common.sh@410 -- # return 0 00:25:57.148 15:49:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:57.148 15:49:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:57.148 15:49:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:57.148 15:49:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:57.148 15:49:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:57.148 15:49:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:57.407 15:49:36 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:25:57.407 15:49:36 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:25:57.407 15:49:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:57.407 15:49:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:57.407 15:49:36 -- common/autotest_common.sh@10 -- # set +x 00:25:57.407 ************************************ 00:25:57.407 START TEST nvmf_digest_clean 00:25:57.407 ************************************ 00:25:57.407 15:49:36 -- common/autotest_common.sh@1104 -- # run_digest 00:25:57.407 15:49:36 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:25:57.407 15:49:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:57.407 15:49:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:57.407 15:49:36 -- common/autotest_common.sh@10 -- # set +x 00:25:57.407 15:49:36 -- nvmf/common.sh@469 -- # nvmfpid=2223706 00:25:57.407 15:49:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:57.407 15:49:36 -- nvmf/common.sh@470 -- # waitforlisten 2223706 00:25:57.407 15:49:36 -- common/autotest_common.sh@819 -- # '[' -z 2223706 ']' 00:25:57.407 15:49:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:57.407 15:49:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:57.407 15:49:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:57.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:57.407 15:49:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:57.407 15:49:36 -- common/autotest_common.sh@10 -- # set +x 00:25:57.407 [2024-07-10 15:49:36.598989] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:25:57.407 [2024-07-10 15:49:36.599075] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:57.407 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.407 [2024-07-10 15:49:36.663213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.407 [2024-07-10 15:49:36.768495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:57.407 [2024-07-10 15:49:36.768673] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:57.407 [2024-07-10 15:49:36.768691] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:57.407 [2024-07-10 15:49:36.768703] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:57.407 [2024-07-10 15:49:36.768746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:57.665 15:49:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:57.665 15:49:36 -- common/autotest_common.sh@852 -- # return 0 00:25:57.665 15:49:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:57.665 15:49:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:57.665 15:49:36 -- common/autotest_common.sh@10 -- # set +x 00:25:57.665 15:49:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:57.665 15:49:36 -- host/digest.sh@120 -- # common_target_config 00:25:57.665 15:49:36 -- host/digest.sh@43 -- # rpc_cmd 00:25:57.665 15:49:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:57.665 15:49:36 -- common/autotest_common.sh@10 -- # set +x 00:25:57.665 null0 00:25:57.665 [2024-07-10 15:49:36.939224] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:57.665 [2024-07-10 15:49:36.963474] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:57.665 15:49:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:57.665 15:49:36 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:25:57.665 15:49:36 -- host/digest.sh@77 -- # local rw bs qd 00:25:57.665 15:49:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:57.665 15:49:36 -- host/digest.sh@80 -- # rw=randread 00:25:57.665 15:49:36 -- host/digest.sh@80 -- # bs=4096 00:25:57.665 15:49:36 -- host/digest.sh@80 -- # qd=128 00:25:57.665 15:49:36 -- host/digest.sh@82 -- # bperfpid=2223752 00:25:57.665 15:49:36 -- host/digest.sh@83 -- # waitforlisten 2223752 /var/tmp/bperf.sock 00:25:57.665 15:49:36 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:57.665 15:49:36 -- common/autotest_common.sh@819 -- # '[' -z 2223752 ']' 00:25:57.665 15:49:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:57.665 15:49:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:57.665 15:49:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:57.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
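The bdevperf process being waited on here is started idle and is configured entirely over its own RPC socket. Condensed from the xtrace entries around this point, one run_bperf iteration amounts to the following (a sketch; the backgrounding and the SPDK variable are illustrative, while the socket path, I/O parameters, NQN and --ddgst flag are the ones used in this first randread pass):

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  # Launch bdevperf idle (-z) on its own core, waiting for RPC configuration.
  $SPDK/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

  # Complete framework init, then attach the namespace with TCP data digest enabled.
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Drive the 2-second workload against the resulting nvme0n1 bdev.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests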
00:25:57.665 15:49:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:57.665 15:49:36 -- common/autotest_common.sh@10 -- # set +x 00:25:57.665 [2024-07-10 15:49:37.009022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:57.665 [2024-07-10 15:49:37.009115] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2223752 ] 00:25:57.665 EAL: No free 2048 kB hugepages reported on node 1 00:25:57.923 [2024-07-10 15:49:37.073134] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:57.923 [2024-07-10 15:49:37.180193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:57.923 15:49:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:57.923 15:49:37 -- common/autotest_common.sh@852 -- # return 0 00:25:57.923 15:49:37 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:25:57.923 15:49:37 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:25:57.923 15:49:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:58.489 15:49:37 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.489 15:49:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:58.747 nvme0n1 00:25:58.747 15:49:37 -- host/digest.sh@91 -- # bperf_py perform_tests 00:25:58.747 15:49:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:58.747 Running I/O for 2 seconds... 
00:26:01.277 00:26:01.277 Latency(us) 00:26:01.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.277 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:01.277 nvme0n1 : 2.04 15617.43 61.01 0.00 0.00 8028.93 2500.08 48545.19 00:26:01.277 =================================================================================================================== 00:26:01.277 Total : 15617.43 61.01 0.00 0.00 8028.93 2500.08 48545.19 00:26:01.277 0 00:26:01.277 15:49:40 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:01.277 15:49:40 -- host/digest.sh@92 -- # get_accel_stats 00:26:01.277 15:49:40 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:01.277 15:49:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:01.277 15:49:40 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:01.277 | select(.opcode=="crc32c") 00:26:01.277 | "\(.module_name) \(.executed)"' 00:26:01.277 15:49:40 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:01.277 15:49:40 -- host/digest.sh@93 -- # exp_module=software 00:26:01.277 15:49:40 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:01.277 15:49:40 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:01.277 15:49:40 -- host/digest.sh@97 -- # killprocess 2223752 00:26:01.277 15:49:40 -- common/autotest_common.sh@926 -- # '[' -z 2223752 ']' 00:26:01.277 15:49:40 -- common/autotest_common.sh@930 -- # kill -0 2223752 00:26:01.277 15:49:40 -- common/autotest_common.sh@931 -- # uname 00:26:01.277 15:49:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:01.277 15:49:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2223752 00:26:01.277 15:49:40 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:01.277 15:49:40 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:01.277 15:49:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2223752' 00:26:01.277 killing process with pid 2223752 00:26:01.277 15:49:40 -- common/autotest_common.sh@945 -- # kill 2223752 00:26:01.277 Received shutdown signal, test time was about 2.000000 seconds 00:26:01.277 00:26:01.277 Latency(us) 00:26:01.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:01.277 =================================================================================================================== 00:26:01.277 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:01.277 15:49:40 -- common/autotest_common.sh@950 -- # wait 2223752 00:26:01.277 15:49:40 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:26:01.277 15:49:40 -- host/digest.sh@77 -- # local rw bs qd 00:26:01.277 15:49:40 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:01.277 15:49:40 -- host/digest.sh@80 -- # rw=randread 00:26:01.277 15:49:40 -- host/digest.sh@80 -- # bs=131072 00:26:01.277 15:49:40 -- host/digest.sh@80 -- # qd=16 00:26:01.277 15:49:40 -- host/digest.sh@82 -- # bperfpid=2224278 00:26:01.277 15:49:40 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:01.277 15:49:40 -- host/digest.sh@83 -- # waitforlisten 2224278 /var/tmp/bperf.sock 00:26:01.277 15:49:40 -- common/autotest_common.sh@819 -- # '[' -z 2224278 ']' 00:26:01.277 15:49:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
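With the 4 KiB randread pass finished, the verdict is not taken from the IOPS table alone: the test reads the accel framework's crc32c counters back from bdevperf to prove that the TCP data digest was really computed, and checks which module did the work (software in this run, since no offload engine is configured). The check, as traced above, consumes two fields; the process substitution below is just one way to feed read:

  read -r acc_module acc_executed < <(
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
      jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
  )
  (( acc_executed > 0 ))              # some crc32c operations must have executed
  [[ $acc_module == software ]]       # and the expected accel module performed them

The same check is repeated for the 128 KiB / QD 16 pass being set up here and again for the randwrite workload that follows.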
00:26:01.277 15:49:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:01.277 15:49:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:01.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:01.277 15:49:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:01.277 15:49:40 -- common/autotest_common.sh@10 -- # set +x 00:26:01.277 [2024-07-10 15:49:40.641673] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:01.277 [2024-07-10 15:49:40.641777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224278 ] 00:26:01.277 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:01.277 Zero copy mechanism will not be used. 00:26:01.535 EAL: No free 2048 kB hugepages reported on node 1 00:26:01.535 [2024-07-10 15:49:40.700604] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.535 [2024-07-10 15:49:40.806583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:01.535 15:49:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:01.535 15:49:40 -- common/autotest_common.sh@852 -- # return 0 00:26:01.535 15:49:40 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:01.535 15:49:40 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:01.535 15:49:40 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:02.098 15:49:41 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.098 15:49:41 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:02.356 nvme0n1 00:26:02.356 15:49:41 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:02.356 15:49:41 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:02.614 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:02.614 Zero copy mechanism will not be used. 00:26:02.614 Running I/O for 2 seconds... 
00:26:04.512 00:26:04.512 Latency(us) 00:26:04.512 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.512 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:04.512 nvme0n1 : 2.00 2467.97 308.50 0.00 0.00 6478.69 1165.08 11845.03 00:26:04.512 =================================================================================================================== 00:26:04.512 Total : 2467.97 308.50 0.00 0.00 6478.69 1165.08 11845.03 00:26:04.512 0 00:26:04.512 15:49:43 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:04.512 15:49:43 -- host/digest.sh@92 -- # get_accel_stats 00:26:04.512 15:49:43 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:04.512 15:49:43 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:04.512 15:49:43 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:04.512 | select(.opcode=="crc32c") 00:26:04.512 | "\(.module_name) \(.executed)"' 00:26:04.769 15:49:44 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:04.769 15:49:44 -- host/digest.sh@93 -- # exp_module=software 00:26:04.769 15:49:44 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:04.769 15:49:44 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:04.769 15:49:44 -- host/digest.sh@97 -- # killprocess 2224278 00:26:04.770 15:49:44 -- common/autotest_common.sh@926 -- # '[' -z 2224278 ']' 00:26:04.770 15:49:44 -- common/autotest_common.sh@930 -- # kill -0 2224278 00:26:04.770 15:49:44 -- common/autotest_common.sh@931 -- # uname 00:26:04.770 15:49:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:04.770 15:49:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2224278 00:26:04.770 15:49:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:04.770 15:49:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:04.770 15:49:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2224278' 00:26:04.770 killing process with pid 2224278 00:26:04.770 15:49:44 -- common/autotest_common.sh@945 -- # kill 2224278 00:26:04.770 Received shutdown signal, test time was about 2.000000 seconds 00:26:04.770 00:26:04.770 Latency(us) 00:26:04.770 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:04.770 =================================================================================================================== 00:26:04.770 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:04.770 15:49:44 -- common/autotest_common.sh@950 -- # wait 2224278 00:26:05.027 15:49:44 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:26:05.027 15:49:44 -- host/digest.sh@77 -- # local rw bs qd 00:26:05.027 15:49:44 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:05.027 15:49:44 -- host/digest.sh@80 -- # rw=randwrite 00:26:05.027 15:49:44 -- host/digest.sh@80 -- # bs=4096 00:26:05.027 15:49:44 -- host/digest.sh@80 -- # qd=128 00:26:05.027 15:49:44 -- host/digest.sh@82 -- # bperfpid=2224714 00:26:05.027 15:49:44 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:05.027 15:49:44 -- host/digest.sh@83 -- # waitforlisten 2224714 /var/tmp/bperf.sock 00:26:05.027 15:49:44 -- common/autotest_common.sh@819 -- # '[' -z 2224714 ']' 00:26:05.027 15:49:44 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:26:05.027 15:49:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:05.027 15:49:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:05.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:05.027 15:49:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:05.027 15:49:44 -- common/autotest_common.sh@10 -- # set +x 00:26:05.027 [2024-07-10 15:49:44.360460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:05.027 [2024-07-10 15:49:44.360534] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2224714 ] 00:26:05.027 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.285 [2024-07-10 15:49:44.419343] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:05.285 [2024-07-10 15:49:44.525200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:05.285 15:49:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:05.285 15:49:44 -- common/autotest_common.sh@852 -- # return 0 00:26:05.285 15:49:44 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:05.285 15:49:44 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:05.285 15:49:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:05.542 15:49:44 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:05.542 15:49:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:06.107 nvme0n1 00:26:06.107 15:49:45 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:06.107 15:49:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:06.107 Running I/O for 2 seconds... 
00:26:08.635 00:26:08.635 Latency(us) 00:26:08.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.635 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:08.635 nvme0n1 : 2.01 20002.84 78.14 0.00 0.00 6385.38 3203.98 10048.85 00:26:08.635 =================================================================================================================== 00:26:08.635 Total : 20002.84 78.14 0.00 0.00 6385.38 3203.98 10048.85 00:26:08.635 0 00:26:08.635 15:49:47 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:08.635 15:49:47 -- host/digest.sh@92 -- # get_accel_stats 00:26:08.635 15:49:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:08.635 15:49:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:08.635 | select(.opcode=="crc32c") 00:26:08.635 | "\(.module_name) \(.executed)"' 00:26:08.635 15:49:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:08.635 15:49:47 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:08.635 15:49:47 -- host/digest.sh@93 -- # exp_module=software 00:26:08.635 15:49:47 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:08.635 15:49:47 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:08.635 15:49:47 -- host/digest.sh@97 -- # killprocess 2224714 00:26:08.635 15:49:47 -- common/autotest_common.sh@926 -- # '[' -z 2224714 ']' 00:26:08.635 15:49:47 -- common/autotest_common.sh@930 -- # kill -0 2224714 00:26:08.635 15:49:47 -- common/autotest_common.sh@931 -- # uname 00:26:08.635 15:49:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:08.635 15:49:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2224714 00:26:08.635 15:49:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:08.635 15:49:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:08.635 15:49:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2224714' 00:26:08.635 killing process with pid 2224714 00:26:08.635 15:49:47 -- common/autotest_common.sh@945 -- # kill 2224714 00:26:08.635 Received shutdown signal, test time was about 2.000000 seconds 00:26:08.635 00:26:08.635 Latency(us) 00:26:08.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.635 =================================================================================================================== 00:26:08.635 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.635 15:49:47 -- common/autotest_common.sh@950 -- # wait 2224714 00:26:08.635 15:49:47 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:26:08.635 15:49:47 -- host/digest.sh@77 -- # local rw bs qd 00:26:08.635 15:49:47 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:08.635 15:49:47 -- host/digest.sh@80 -- # rw=randwrite 00:26:08.635 15:49:47 -- host/digest.sh@80 -- # bs=131072 00:26:08.635 15:49:47 -- host/digest.sh@80 -- # qd=16 00:26:08.635 15:49:47 -- host/digest.sh@82 -- # bperfpid=2225136 00:26:08.635 15:49:47 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:08.635 15:49:47 -- host/digest.sh@83 -- # waitforlisten 2225136 /var/tmp/bperf.sock 00:26:08.635 15:49:47 -- common/autotest_common.sh@819 -- # '[' -z 2225136 ']' 00:26:08.635 15:49:47 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:26:08.635 15:49:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:08.635 15:49:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:08.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:08.635 15:49:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:08.635 15:49:47 -- common/autotest_common.sh@10 -- # set +x 00:26:08.893 [2024-07-10 15:49:48.029122] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:08.893 [2024-07-10 15:49:48.029211] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225136 ] 00:26:08.893 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:08.893 Zero copy mechanism will not be used. 00:26:08.893 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.893 [2024-07-10 15:49:48.089190] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.893 [2024-07-10 15:49:48.196674] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:08.893 15:49:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:08.893 15:49:48 -- common/autotest_common.sh@852 -- # return 0 00:26:08.893 15:49:48 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:26:08.893 15:49:48 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:26:08.893 15:49:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:09.457 15:49:48 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.457 15:49:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:09.713 nvme0n1 00:26:09.713 15:49:48 -- host/digest.sh@91 -- # bperf_py perform_tests 00:26:09.713 15:49:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:09.713 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:09.713 Zero copy mechanism will not be used. 00:26:09.713 Running I/O for 2 seconds... 
00:26:11.611 00:26:11.611 Latency(us) 00:26:11.611 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:11.611 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:11.611 nvme0n1 : 2.01 2054.28 256.79 0.00 0.00 7769.53 5364.24 17185.00 00:26:11.611 =================================================================================================================== 00:26:11.611 Total : 2054.28 256.79 0.00 0.00 7769.53 5364.24 17185.00 00:26:11.611 0 00:26:11.869 15:49:50 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:26:11.869 15:49:50 -- host/digest.sh@92 -- # get_accel_stats 00:26:11.869 15:49:50 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:11.869 15:49:50 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:11.869 | select(.opcode=="crc32c") 00:26:11.869 | "\(.module_name) \(.executed)"' 00:26:11.869 15:49:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:11.869 15:49:51 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:26:11.869 15:49:51 -- host/digest.sh@93 -- # exp_module=software 00:26:11.869 15:49:51 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:26:11.869 15:49:51 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:11.869 15:49:51 -- host/digest.sh@97 -- # killprocess 2225136 00:26:11.869 15:49:51 -- common/autotest_common.sh@926 -- # '[' -z 2225136 ']' 00:26:11.869 15:49:51 -- common/autotest_common.sh@930 -- # kill -0 2225136 00:26:11.869 15:49:51 -- common/autotest_common.sh@931 -- # uname 00:26:11.869 15:49:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:11.869 15:49:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2225136 00:26:12.127 15:49:51 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:12.127 15:49:51 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:12.127 15:49:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2225136' 00:26:12.127 killing process with pid 2225136 00:26:12.127 15:49:51 -- common/autotest_common.sh@945 -- # kill 2225136 00:26:12.127 Received shutdown signal, test time was about 2.000000 seconds 00:26:12.127 00:26:12.127 Latency(us) 00:26:12.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.127 =================================================================================================================== 00:26:12.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.127 15:49:51 -- common/autotest_common.sh@950 -- # wait 2225136 00:26:12.385 15:49:51 -- host/digest.sh@126 -- # killprocess 2223706 00:26:12.385 15:49:51 -- common/autotest_common.sh@926 -- # '[' -z 2223706 ']' 00:26:12.385 15:49:51 -- common/autotest_common.sh@930 -- # kill -0 2223706 00:26:12.385 15:49:51 -- common/autotest_common.sh@931 -- # uname 00:26:12.385 15:49:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:12.385 15:49:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2223706 00:26:12.385 15:49:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:12.385 15:49:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:12.385 15:49:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2223706' 00:26:12.385 killing process with pid 2223706 00:26:12.385 15:49:51 -- common/autotest_common.sh@945 -- # kill 2223706 00:26:12.385 15:49:51 -- common/autotest_common.sh@950 -- # wait 2223706 
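Each nvmf_digest_clean pass above follows the same RPC sequence against the bdevperf instance listening on /var/tmp/bperf.sock (started with -z --wait-for-rpc, as the trace shows). A minimal hand-run sketch of that sequence, re-issuing only the RPCs that appear in the trace and assuming the SPDK checkout path used by this job:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # path as it appears in the trace
# finish subsystem initialization, then attach the target namespace with TCP data digest enabled
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# run the timed workload, then check which accel module executed the crc32c digest operations
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'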
00:26:12.644 00:26:12.644 real 0m15.268s 00:26:12.644 user 0m28.616s 00:26:12.644 sys 0m4.288s 00:26:12.644 15:49:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.644 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.644 ************************************ 00:26:12.644 END TEST nvmf_digest_clean 00:26:12.644 ************************************ 00:26:12.644 15:49:51 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:26:12.644 15:49:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:12.644 15:49:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:12.644 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.644 ************************************ 00:26:12.644 START TEST nvmf_digest_error 00:26:12.644 ************************************ 00:26:12.644 15:49:51 -- common/autotest_common.sh@1104 -- # run_digest_error 00:26:12.644 15:49:51 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:26:12.644 15:49:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:12.644 15:49:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:12.644 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.644 15:49:51 -- nvmf/common.sh@469 -- # nvmfpid=2225698 00:26:12.644 15:49:51 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:12.644 15:49:51 -- nvmf/common.sh@470 -- # waitforlisten 2225698 00:26:12.644 15:49:51 -- common/autotest_common.sh@819 -- # '[' -z 2225698 ']' 00:26:12.644 15:49:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.644 15:49:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:12.644 15:49:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:12.644 15:49:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:12.644 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:26:12.644 [2024-07-10 15:49:51.887314] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:12.644 [2024-07-10 15:49:51.887384] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.644 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.644 [2024-07-10 15:49:51.949814] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.902 [2024-07-10 15:49:52.057032] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:12.902 [2024-07-10 15:49:52.057162] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.902 [2024-07-10 15:49:52.057178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.902 [2024-07-10 15:49:52.057189] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:12.902 [2024-07-10 15:49:52.057216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.902 15:49:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:12.902 15:49:52 -- common/autotest_common.sh@852 -- # return 0 00:26:12.902 15:49:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:12.902 15:49:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:12.903 15:49:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.903 15:49:52 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.903 15:49:52 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:12.903 15:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.903 15:49:52 -- common/autotest_common.sh@10 -- # set +x 00:26:12.903 [2024-07-10 15:49:52.165883] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:12.903 15:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:12.903 15:49:52 -- host/digest.sh@104 -- # common_target_config 00:26:12.903 15:49:52 -- host/digest.sh@43 -- # rpc_cmd 00:26:12.903 15:49:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:12.903 15:49:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 null0 00:26:13.161 [2024-07-10 15:49:52.284697] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:13.161 [2024-07-10 15:49:52.308918] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:13.161 15:49:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:13.161 15:49:52 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:26:13.161 15:49:52 -- host/digest.sh@54 -- # local rw bs qd 00:26:13.161 15:49:52 -- host/digest.sh@56 -- # rw=randread 00:26:13.161 15:49:52 -- host/digest.sh@56 -- # bs=4096 00:26:13.161 15:49:52 -- host/digest.sh@56 -- # qd=128 00:26:13.161 15:49:52 -- host/digest.sh@58 -- # bperfpid=2225728 00:26:13.161 15:49:52 -- host/digest.sh@60 -- # waitforlisten 2225728 /var/tmp/bperf.sock 00:26:13.161 15:49:52 -- common/autotest_common.sh@819 -- # '[' -z 2225728 ']' 00:26:13.161 15:49:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:13.161 15:49:52 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:13.161 15:49:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:13.161 15:49:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:13.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:13.161 15:49:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:13.161 15:49:52 -- common/autotest_common.sh@10 -- # set +x 00:26:13.161 [2024-07-10 15:49:52.354397] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:13.161 [2024-07-10 15:49:52.354483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225728 ] 00:26:13.161 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.161 [2024-07-10 15:49:52.411382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.161 [2024-07-10 15:49:52.516452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.097 15:49:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:14.097 15:49:53 -- common/autotest_common.sh@852 -- # return 0 00:26:14.097 15:49:53 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:14.097 15:49:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:14.355 15:49:53 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:14.355 15:49:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.355 15:49:53 -- common/autotest_common.sh@10 -- # set +x 00:26:14.355 15:49:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.355 15:49:53 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.355 15:49:53 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.921 nvme0n1 00:26:14.921 15:49:54 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:14.921 15:49:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:14.921 15:49:54 -- common/autotest_common.sh@10 -- # set +x 00:26:14.921 15:49:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:14.921 15:49:54 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:14.921 15:49:54 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.921 Running I/O for 2 seconds... 
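The nvmf_digest_error run that follows exercises the same read path while crc32c results are deliberately corrupted, so the data digest errors printed below are the expected outcome. A minimal sketch of the setup traced above, in trace order; the rpc_cmd calls go to the nvmf_tgt RPC socket (/var/tmp/spdk.sock, per the waitforlisten trace) and the bperf_rpc calls to /var/tmp/bperf.sock:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# target side: route crc32c operations to the error-injection accel module
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_assign_opc -o crc32c -m error
# bdevperf side: keep NVMe error statistics and retry indefinitely
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# keep injection disabled while the controller attaches with data digest enabled
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t disable
$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt 256 crc32c results, then drive I/O for the timed run
$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock accel_error_inject_error -o crc32c -t corrupt -i 256
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests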
00:26:14.921 [2024-07-10 15:49:54.257983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:14.921 [2024-07-10 15:49:54.258051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.921 [2024-07-10 15:49:54.258070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.921 [2024-07-10 15:49:54.272391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:14.921 [2024-07-10 15:49:54.272422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.921 [2024-07-10 15:49:54.272449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:14.921 [2024-07-10 15:49:54.288081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:14.921 [2024-07-10 15:49:54.288113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.921 [2024-07-10 15:49:54.288137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.303789] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.303823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10650 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.303855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.319252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.319284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.319301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.332733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.332765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.332784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.343805] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.343852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:3203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.343869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.358286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.358315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.358331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.373873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.373901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13310 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.373917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.390316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.390351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.390371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.407632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.407660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.407676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.179 [2024-07-10 15:49:54.424277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.179 [2024-07-10 15:49:54.424318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:9836 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.179 [2024-07-10 15:49:54.424338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.180 [2024-07-10 15:49:54.441964] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.180 [2024-07-10 15:49:54.441999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.180 [2024-07-10 15:49:54.442018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.180 [2024-07-10 15:49:54.459436] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.180 [2024-07-10 15:49:54.459482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.180 [2024-07-10 15:49:54.459498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.180 [2024-07-10 15:49:54.476281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.180 [2024-07-10 15:49:54.476316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.180 [2024-07-10 15:49:54.476335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.180 [2024-07-10 15:49:54.493170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.180 [2024-07-10 15:49:54.493205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.180 [2024-07-10 15:49:54.493224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.180 [2024-07-10 15:49:54.510115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.180 [2024-07-10 15:49:54.510150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.180 [2024-07-10 15:49:54.510169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.180 [2024-07-10 15:49:54.527688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.180 [2024-07-10 15:49:54.527738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:8626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.180 [2024-07-10 15:49:54.527758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.180 [2024-07-10 15:49:54.545389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.180 [2024-07-10 15:49:54.545431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.180 [2024-07-10 15:49:54.545452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.561743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.561780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.561801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.579378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.579414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 
[2024-07-10 15:49:54.579443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.590606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.590635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:18083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.590651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.607101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.607136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.607157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.623833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.623868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.623887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.640818] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.640854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.640873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.658674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.658722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.658742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.676379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.676414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.676443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.693091] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.693126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2143 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.693145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.710665] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.710694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.710714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.727786] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.438 [2024-07-10 15:49:54.727821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:12434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.438 [2024-07-10 15:49:54.727841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.438 [2024-07-10 15:49:54.744797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.439 [2024-07-10 15:49:54.744832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.439 [2024-07-10 15:49:54.744851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.439 [2024-07-10 15:49:54.760951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.439 [2024-07-10 15:49:54.760985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.439 [2024-07-10 15:49:54.761005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.439 [2024-07-10 15:49:54.773093] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.439 [2024-07-10 15:49:54.773129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.439 [2024-07-10 15:49:54.773148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.439 [2024-07-10 15:49:54.789893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.439 [2024-07-10 15:49:54.789929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:10005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.439 [2024-07-10 15:49:54.789949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.439 [2024-07-10 15:49:54.806694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.439 [2024-07-10 15:49:54.806725] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.439 [2024-07-10 15:49:54.806761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.823482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.823513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:3862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.823529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.841468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.841500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:12910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.841518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.858521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.858559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.858576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.875369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.875404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.875430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.892073] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.892108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:20668 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.892127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.908902] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.908938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6577 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.908957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.926229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 
15:49:54.926263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.926283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.943249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.943284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:180 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.943303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.960364] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.960398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:12550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.960417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.978433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.978474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.978508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:54.994653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:54.994685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:24456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:54.994718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:55.012287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:55.012322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:55.012341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:55.023557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:55.023589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:55.023606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:55.040441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:55.040486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:55.040502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.697 [2024-07-10 15:49:55.058547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.697 [2024-07-10 15:49:55.058579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:9074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.697 [2024-07-10 15:49:55.058596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.955 [2024-07-10 15:49:55.075494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.955 [2024-07-10 15:49:55.075527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.955 [2024-07-10 15:49:55.075545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.955 [2024-07-10 15:49:55.093335] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.955 [2024-07-10 15:49:55.093372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.955 [2024-07-10 15:49:55.093391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.955 [2024-07-10 15:49:55.110036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.955 [2024-07-10 15:49:55.110072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.110091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.126390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.126433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.126469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.143760] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.143814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.143835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.162203] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.162238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.162257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.179586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.179616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.179633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.196480] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.196515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.196548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.213896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.213930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.231755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.231804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:25283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.231823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.249311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.249346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.249365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.265588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.265619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.265636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:15.956 [2024-07-10 15:49:55.276720] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.276748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.276763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.293502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.293532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:10071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.293548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.309595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.309624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.309642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:15.956 [2024-07-10 15:49:55.325594] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:15.956 [2024-07-10 15:49:55.325623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:15.956 [2024-07-10 15:49:55.325638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.343376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.343412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.343444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.359988] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.360023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:23593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.360042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.377568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.377597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:1159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.377612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.393735] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.393770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.393789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.411345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.411379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:22978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.411398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.428052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.428085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.428110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.444536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.444575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.444593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.461200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.461233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:2947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.461252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.473003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.473036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:7649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.473055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.490039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.490072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:3063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.490091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.505689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.505736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.505755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.522487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.522515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.522530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.539024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.539059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:16294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.539077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.555920] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.555954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.555973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.572097] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.572137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:15020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.572157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.214 [2024-07-10 15:49:55.583089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.214 [2024-07-10 15:49:55.583121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.214 [2024-07-10 15:49:55.583139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.472 [2024-07-10 15:49:55.600666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.472 [2024-07-10 15:49:55.600696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7863 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:16.472 [2024-07-10 15:49:55.600727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.472 [2024-07-10 15:49:55.617494] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.472 [2024-07-10 15:49:55.617526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.617544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.632876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.632911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.632930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.650498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.650527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.650543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.667612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.667640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.667655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.684369] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.684403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20978 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.684422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.701684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.701713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:10907 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.701729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.719643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.719671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8180 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.719686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.736991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.737024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.737043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.754478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.754508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.754525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.771391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.771433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.771454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.787660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.787690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:25202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.787721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.798859] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.798892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.798911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.815559] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.815587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.815602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.831291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.831325] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:5320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.831343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.473 [2024-07-10 15:49:55.848450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.473 [2024-07-10 15:49:55.848497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.473 [2024-07-10 15:49:55.848521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.864016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.864049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.864066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.882344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.882379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.882398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.899500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.899527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.899543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.915096] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.915129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.915148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.932971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.933005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:11200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.933024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.949074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 
00:26:16.731 [2024-07-10 15:49:55.949107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.949126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.966971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.967004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.967023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.981231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.981264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:25262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.981282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:55.994068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.731 [2024-07-10 15:49:55.994102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:5037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.731 [2024-07-10 15:49:55.994120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.731 [2024-07-10 15:49:56.010353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.732 [2024-07-10 15:49:56.010387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.732 [2024-07-10 15:49:56.010405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.732 [2024-07-10 15:49:56.027119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.732 [2024-07-10 15:49:56.027153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7069 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.732 [2024-07-10 15:49:56.027172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.732 [2024-07-10 15:49:56.044255] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.732 [2024-07-10 15:49:56.044289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.732 [2024-07-10 15:49:56.044307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.732 [2024-07-10 15:49:56.061252] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.732 [2024-07-10 15:49:56.061286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:8876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.732 [2024-07-10 15:49:56.061305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.732 [2024-07-10 15:49:56.078922] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.732 [2024-07-10 15:49:56.078957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.732 [2024-07-10 15:49:56.078975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.732 [2024-07-10 15:49:56.096500] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.732 [2024-07-10 15:49:56.096527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.732 [2024-07-10 15:49:56.096542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.989 [2024-07-10 15:49:56.112983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.989 [2024-07-10 15:49:56.113019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.989 [2024-07-10 15:49:56.113039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.989 [2024-07-10 15:49:56.130287] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.989 [2024-07-10 15:49:56.130325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:9187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.989 [2024-07-10 15:49:56.130351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.989 [2024-07-10 15:49:56.146002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.990 [2024-07-10 15:49:56.146036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.990 [2024-07-10 15:49:56.146055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.990 [2024-07-10 15:49:56.157538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.990 [2024-07-10 15:49:56.157565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.990 [2024-07-10 15:49:56.157580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.990 [2024-07-10 15:49:56.175325] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.990 [2024-07-10 15:49:56.175358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.990 [2024-07-10 15:49:56.175377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.990 [2024-07-10 15:49:56.191537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.990 [2024-07-10 15:49:56.191565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.990 [2024-07-10 15:49:56.191581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.990 [2024-07-10 15:49:56.207193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.990 [2024-07-10 15:49:56.207227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:23392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.990 [2024-07-10 15:49:56.207245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.990 [2024-07-10 15:49:56.224331] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.990 [2024-07-10 15:49:56.224364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.990 [2024-07-10 15:49:56.224383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.990 [2024-07-10 15:49:56.241978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e93f00) 00:26:16.990 [2024-07-10 15:49:56.242012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:10258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:16.990 [2024-07-10 15:49:56.242030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:16.990 00:26:16.990 Latency(us) 00:26:16.990 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.990 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:16.990 nvme0n1 : 2.01 15565.33 60.80 0.00 0.00 8208.29 2366.58 25826.04 00:26:16.990 =================================================================================================================== 00:26:16.990 Total : 15565.33 60.80 0.00 0.00 8208.29 2366.58 25826.04 00:26:16.990 0 00:26:16.990 15:49:56 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:16.990 15:49:56 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:16.990 15:49:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:16.990 15:49:56 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:16.990 | .driver_specific 00:26:16.990 | .nvme_error 00:26:16.990 | .status_code 00:26:16.990 | 
.command_transient_transport_error' 00:26:17.247 15:49:56 -- host/digest.sh@71 -- # (( 122 > 0 )) 00:26:17.247 15:49:56 -- host/digest.sh@73 -- # killprocess 2225728 00:26:17.247 15:49:56 -- common/autotest_common.sh@926 -- # '[' -z 2225728 ']' 00:26:17.247 15:49:56 -- common/autotest_common.sh@930 -- # kill -0 2225728 00:26:17.248 15:49:56 -- common/autotest_common.sh@931 -- # uname 00:26:17.248 15:49:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:17.248 15:49:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2225728 00:26:17.248 15:49:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:17.248 15:49:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:17.248 15:49:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2225728' 00:26:17.248 killing process with pid 2225728 00:26:17.248 15:49:56 -- common/autotest_common.sh@945 -- # kill 2225728 00:26:17.248 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.248 00:26:17.248 Latency(us) 00:26:17.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.248 =================================================================================================================== 00:26:17.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.248 15:49:56 -- common/autotest_common.sh@950 -- # wait 2225728 00:26:17.506 15:49:56 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:26:17.506 15:49:56 -- host/digest.sh@54 -- # local rw bs qd 00:26:17.506 15:49:56 -- host/digest.sh@56 -- # rw=randread 00:26:17.506 15:49:56 -- host/digest.sh@56 -- # bs=131072 00:26:17.506 15:49:56 -- host/digest.sh@56 -- # qd=16 00:26:17.506 15:49:56 -- host/digest.sh@58 -- # bperfpid=2226279 00:26:17.506 15:49:56 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:17.506 15:49:56 -- host/digest.sh@60 -- # waitforlisten 2226279 /var/tmp/bperf.sock 00:26:17.506 15:49:56 -- common/autotest_common.sh@819 -- # '[' -z 2226279 ']' 00:26:17.506 15:49:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.506 15:49:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:17.506 15:49:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.506 15:49:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:17.506 15:49:56 -- common/autotest_common.sh@10 -- # set +x 00:26:17.506 [2024-07-10 15:49:56.854668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:17.506 [2024-07-10 15:49:56.854762] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226279 ] 00:26:17.506 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.506 Zero copy mechanism will not be used. 
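A minimal sketch of the two steps traced just above, assembled only from commands visible in this log (the workspace path, socket name, RPC names, jq filter, and bdevperf flags are copied from the trace; the errcount variable, the explicit backgrounding, and the overall shape are illustrative assumptions, not the script's literal code):

# Read nvme0n1's error counters from the bdevperf RPC socket and pull out the
# transient transport error count (same bdev_get_iostat call and jq filter as
# the host/digest.sh trace above); the run above reported 122.
ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from the trace
errcount=$("$ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
(( errcount > 0 ))   # the check only passes if digest errors were actually counted

# Launch the next 2-second run (randread, 128 KiB I/O, queue depth 16) in RPC
# mode (-z) so it idles until perform_tests is sent; backgrounding is assumed
# here because the trace then calls waitforlisten on /var/tmp/bperf.sock.
"$ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
  -w randread -o 131072 -t 2 -q 16 -z &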
00:26:17.506 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.764 [2024-07-10 15:49:56.916894] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.764 [2024-07-10 15:49:57.028835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:18.695 15:49:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:18.695 15:49:57 -- common/autotest_common.sh@852 -- # return 0 00:26:18.695 15:49:57 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:18.695 15:49:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:18.953 15:49:58 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:18.953 15:49:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:18.953 15:49:58 -- common/autotest_common.sh@10 -- # set +x 00:26:18.953 15:49:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:18.953 15:49:58 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.953 15:49:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.211 nvme0n1 00:26:19.211 15:49:58 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:19.211 15:49:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.211 15:49:58 -- common/autotest_common.sh@10 -- # set +x 00:26:19.211 15:49:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.211 15:49:58 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:19.211 15:49:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.211 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.211 Zero copy mechanism will not be used. 00:26:19.211 Running I/O for 2 seconds... 
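Before the new run's error records begin below, the trace above configures the fresh bdevperf instance. A condensed sketch of that sequence, reusing the $ROOT shorthand from the sketch above (every RPC name and flag is copied verbatim from the logged commands; the accel_error_inject_error calls go through the framework's rpc_cmd helper rather than the bperf socket, and which socket that resolves to is not visible here, so the bare default-socket form is an assumption):

# Keep per-controller NVMe error statistics and retry transport errors forever,
# so transient digest failures are counted instead of failing the bdev.
"$ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Make sure crc32c corruption is disabled while the controller attaches.
# (Issued via rpc_cmd in the trace; default RPC socket assumed here.)
"$ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t disable

# Attach over TCP with data digest (--ddgst) enabled so corrupted CRCs surface
# as the "data digest error" records seen throughout this log.
"$ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Re-arm the injector to corrupt the next 32 crc32c operations, then kick off
# the queued bdevperf job for its 2-second window.
"$ROOT/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32
"$ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests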
00:26:19.469 [2024-07-10 15:49:58.598934] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.469 [2024-07-10 15:49:58.598988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.599010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.609213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.609248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.609268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.619448] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.619494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.619510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.629669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.629720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.629740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.640017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.640050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.640069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.650284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.650317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.650337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.660558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.660602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.660618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.670753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.670785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.670804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.681179] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.681212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.681230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.691557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.691593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.691610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.702041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.702072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.702090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.712506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.712535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.712551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.722996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.723029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.723047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.733617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.733646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.733662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.744034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.744066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.744091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.754519] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.754548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.754564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.764799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.764832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.764851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.775098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.775130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.775149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.785438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.785483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.785499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.795539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.795567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.795583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.805672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.805715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:19.470 [2024-07-10 15:49:58.805734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.816140] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.816173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.816192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.826346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.826379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.826397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.470 [2024-07-10 15:49:58.836478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.470 [2024-07-10 15:49:58.836510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.470 [2024-07-10 15:49:58.836529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.846565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.846596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.846613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.857387] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.857423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.857453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.867592] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.867622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.867639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.878110] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.878142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.878161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.888577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.888606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.888622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.898014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.898047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.898065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.908658] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.908687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.908704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.918402] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.918443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.918482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.928968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.929000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.929019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.939579] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.939608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.939624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.949982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.950015] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.950034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.960734] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.960780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.960798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.971360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.971392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.971411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.981557] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.981601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.981617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:58.991731] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:58.991776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:58.991794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.002582] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.002611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.002627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.012660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.012695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.012711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.023165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 
00:26:19.729 [2024-07-10 15:49:59.023198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.023216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.033827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.033860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.033879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.044482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.044519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.044535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.055041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.055074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.055092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.065589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.065616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.065631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.076172] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.076205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.076223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.087044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.087076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.087095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.729 [2024-07-10 15:49:59.097278] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.729 [2024-07-10 15:49:59.097309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.729 [2024-07-10 15:49:59.097327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.987 [2024-07-10 15:49:59.107566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.987 [2024-07-10 15:49:59.107599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.987 [2024-07-10 15:49:59.107617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.117645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.117675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.117692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.127846] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.127880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.127899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.138903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.138936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.138955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.149243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.149277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.149296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.159660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.159715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.159734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.170349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.170382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.170401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.180629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.180672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.180687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.191712] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.191757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.191782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.202193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.202225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.202244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.213302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.213335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.213353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.224675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.224705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.224739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.235629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.235658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.235674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.245269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.245302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.245320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.256368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.256402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.256421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.266739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.266786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.266805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.277753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.277785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.277804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.288000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.288033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.288051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.298813] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.298846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.298865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.309489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.309518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.309549] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.319863] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.319895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.319914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.330763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.330796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.330814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.341684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.341726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.341742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:19.988 [2024-07-10 15:49:59.352711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:19.988 [2024-07-10 15:49:59.352740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:19.988 [2024-07-10 15:49:59.352773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.364218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.364254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.364274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.374533] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.374579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.374602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.384847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.384879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:20.247 [2024-07-10 15:49:59.384899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.395269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.395302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.395321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.405756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.405789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.405808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.416246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.416279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.416298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.426453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.426482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.426498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.437034] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.437066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.437085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.447498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.447540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.447557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.458134] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.458167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.458185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.468627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.468675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.468691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.479197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.479229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.479247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.489651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.489679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.489695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.499890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.499922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.499941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.510186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.510218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.510235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.520580] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.520623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.520639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.531804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.531837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.531856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.542384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.542417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.542460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.552853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.552885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.552903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.563296] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.563328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.247 [2024-07-10 15:49:59.563346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.247 [2024-07-10 15:49:59.573567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.247 [2024-07-10 15:49:59.573595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.248 [2024-07-10 15:49:59.573626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.248 [2024-07-10 15:49:59.584003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.248 [2024-07-10 15:49:59.584034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.248 [2024-07-10 15:49:59.584052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.248 [2024-07-10 15:49:59.594468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.248 [2024-07-10 15:49:59.594497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.248 [2024-07-10 15:49:59.594513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.248 [2024-07-10 15:49:59.605525] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 
00:26:20.248 [2024-07-10 15:49:59.605555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.248 [2024-07-10 15:49:59.605573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.248 [2024-07-10 15:49:59.615802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.248 [2024-07-10 15:49:59.615835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.248 [2024-07-10 15:49:59.615853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.626012] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.626048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.626068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.636321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.636355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.636374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.646620] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.646670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.646687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.656801] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.656834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.656853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.667269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.667302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.667320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.677664] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.677692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.677724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.688683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.688742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.688761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.700003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.700037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.700056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.711258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.711294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.711314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.722136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.722171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.722190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.732733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.732766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.732786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.742918] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.742962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.742981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.753498] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.753528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.753545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.763779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.763819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.763837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.774321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.774354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.774373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.784626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.784655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.784671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.794946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.794979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.794998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.805537] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.805580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.805597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.815933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.815965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.815984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.826683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.826713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.826754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.837350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.837382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.837400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.847726] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.847759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.847779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.858041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.858073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.858091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.868572] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.868601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.868617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.507 [2024-07-10 15:49:59.878877] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.507 [2024-07-10 15:49:59.878913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.507 [2024-07-10 15:49:59.878932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.766 [2024-07-10 15:49:59.889269] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.766 [2024-07-10 15:49:59.889305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.766 [2024-07-10 15:49:59.889324] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.766 [2024-07-10 15:49:59.899682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.766 [2024-07-10 15:49:59.899730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.766 [2024-07-10 15:49:59.899748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.766 [2024-07-10 15:49:59.910124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.766 [2024-07-10 15:49:59.910157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.766 [2024-07-10 15:49:59.910175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.766 [2024-07-10 15:49:59.920642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.920675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:49:59.920693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:49:59.931660] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.931703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:49:59.931718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:49:59.942153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.942185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:49:59.942204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:49:59.952711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.952738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:49:59.952754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:49:59.963383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.963415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:20.767 [2024-07-10 15:49:59.963445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:49:59.973679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.973708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:49:59.973723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:49:59.984231] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.984264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:49:59.984282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:49:59.994628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:49:59.994656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:49:59.994672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.005485] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.005553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.005591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.015957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.015995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.016015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.026366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.026401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.026420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.036640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.036673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.036689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.047645] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.047680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.047697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.058166] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.058197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.058214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.068437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.068492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.068509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.078800] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.078833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.078852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.089350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.089383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.089401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.099654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.099690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.099723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.109977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.110009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.110028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.120207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.120239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.120257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.131376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.131420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.131444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:20.767 [2024-07-10 15:50:00.141776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:20.767 [2024-07-10 15:50:00.141808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:20.767 [2024-07-10 15:50:00.141841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.026 [2024-07-10 15:50:00.152218] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.026 [2024-07-10 15:50:00.152253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.026 [2024-07-10 15:50:00.152273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.026 [2024-07-10 15:50:00.162635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.026 [2024-07-10 15:50:00.162665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.026 [2024-07-10 15:50:00.162682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.026 [2024-07-10 15:50:00.173171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.026 [2024-07-10 15:50:00.173204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.026 [2024-07-10 15:50:00.173223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.183783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 
00:26:21.027 [2024-07-10 15:50:00.183816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.183835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.194479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.194509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.194527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.205506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.205551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.205567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.216065] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.216097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.216116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.226460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.226503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.226518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.236779] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.236810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.236828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.247113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.247145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.247163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.257535] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.257563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.257578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.267983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.268015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.268033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.278392] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.278423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.278457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.288883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.288916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.288934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.299339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.299370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.299389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.309764] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.309797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.309816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.320487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.320530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.320546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.330831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.330863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.330881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.341251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.341283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.341301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.351652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.351680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.351696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.362249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.362281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.362300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.372673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.372727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.372746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.382997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.383029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.383047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.027 [2024-07-10 15:50:00.393315] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.027 [2024-07-10 15:50:00.393346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.027 [2024-07-10 15:50:00.393365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.311 [2024-07-10 15:50:00.403812] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.403848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.403868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.414407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.414452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.414489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.425152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.425186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.425205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.435356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.435389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.435408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.445962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.445994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.446013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.456266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.456300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.456318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.466388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.466421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.466449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.476772] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.476804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.476823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.487623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.487652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.487669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.498663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.498691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.498722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.509294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.509327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.509346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.519770] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.519803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.519821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.530151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.530184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.530202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.540550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.540577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:21.312 [2024-07-10 15:50:00.540593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.550885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.550923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.550942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.561366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.561399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.561418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.571831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.571864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.571882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.312 [2024-07-10 15:50:00.582285] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x13d7880) 00:26:21.312 [2024-07-10 15:50:00.582316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.312 [2024-07-10 15:50:00.582335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.312 00:26:21.312 Latency(us) 00:26:21.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.312 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:21.312 nvme0n1 : 2.00 2948.50 368.56 0.00 0.00 5422.49 4320.52 14272.28 00:26:21.312 =================================================================================================================== 00:26:21.312 Total : 2948.50 368.56 0.00 0.00 5422.49 4320.52 14272.28 00:26:21.312 0 00:26:21.312 15:50:00 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:21.312 15:50:00 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:21.312 15:50:00 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:21.312 | .driver_specific 00:26:21.312 | .nvme_error 00:26:21.312 | .status_code 00:26:21.312 | .command_transient_transport_error' 00:26:21.312 15:50:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:21.592 15:50:00 -- host/digest.sh@71 -- # (( 190 > 0 )) 00:26:21.592 15:50:00 -- host/digest.sh@73 -- # killprocess 2226279 00:26:21.592 15:50:00 -- common/autotest_common.sh@926 -- # '[' -z 2226279 ']' 00:26:21.592 15:50:00 -- common/autotest_common.sh@930 -- # kill 
-0 2226279 00:26:21.592 15:50:00 -- common/autotest_common.sh@931 -- # uname 00:26:21.592 15:50:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:21.592 15:50:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2226279 00:26:21.592 15:50:00 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:21.592 15:50:00 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:21.592 15:50:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2226279' 00:26:21.592 killing process with pid 2226279 00:26:21.592 15:50:00 -- common/autotest_common.sh@945 -- # kill 2226279 00:26:21.592 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.592 00:26:21.592 Latency(us) 00:26:21.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.592 =================================================================================================================== 00:26:21.592 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.592 15:50:00 -- common/autotest_common.sh@950 -- # wait 2226279 00:26:21.850 15:50:01 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:26:21.850 15:50:01 -- host/digest.sh@54 -- # local rw bs qd 00:26:21.850 15:50:01 -- host/digest.sh@56 -- # rw=randwrite 00:26:21.850 15:50:01 -- host/digest.sh@56 -- # bs=4096 00:26:21.850 15:50:01 -- host/digest.sh@56 -- # qd=128 00:26:21.850 15:50:01 -- host/digest.sh@58 -- # bperfpid=2226844 00:26:21.850 15:50:01 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:21.850 15:50:01 -- host/digest.sh@60 -- # waitforlisten 2226844 /var/tmp/bperf.sock 00:26:21.850 15:50:01 -- common/autotest_common.sh@819 -- # '[' -z 2226844 ']' 00:26:21.850 15:50:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:21.850 15:50:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:21.850 15:50:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:21.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:21.850 15:50:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:21.850 15:50:01 -- common/autotest_common.sh@10 -- # set +x 00:26:21.850 [2024-07-10 15:50:01.173695] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:26:21.850 [2024-07-10 15:50:01.173813] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2226844 ] 00:26:21.850 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.109 [2024-07-10 15:50:01.238803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.109 [2024-07-10 15:50:01.344912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:23.041 15:50:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:23.041 15:50:02 -- common/autotest_common.sh@852 -- # return 0 00:26:23.041 15:50:02 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:23.041 15:50:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:23.041 15:50:02 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:23.041 15:50:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.041 15:50:02 -- common/autotest_common.sh@10 -- # set +x 00:26:23.041 15:50:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.041 15:50:02 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.041 15:50:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.607 nvme0n1 00:26:23.607 15:50:02 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:23.607 15:50:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:23.607 15:50:02 -- common/autotest_common.sh@10 -- # set +x 00:26:23.607 15:50:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:23.607 15:50:02 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:23.607 15:50:02 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.607 Running I/O for 2 seconds... 
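For reference, the sequence traced above (start bdevperf idle on its own RPC socket, enable NVMe error counters with infinite bdev retries, attach the controller with data digest enabled, inject CRC-32C corruption through the accel error RPC, run I/O for two seconds, then read the transient-transport-error counter out of bdev_get_iostat) can be condensed into a short standalone sketch. The paths, socket names and the target-side socket below are assumptions for illustration; only the RPC names, flags and the jq filter are taken from the trace itself.

  #!/usr/bin/env bash
  # Condensed sketch of the digest run traced above -- illustrative only.
  # SPDK_DIR, BPERF_SOCK and TGT_SOCK are assumptions; the RPC names and flags
  # are the ones visible in the trace.
  SPDK_DIR=${SPDK_DIR:-/path/to/spdk}        # assumed SPDK checkout/build
  BPERF_SOCK=/var/tmp/bperf.sock             # bdevperf's private RPC socket
  TGT_SOCK=/var/tmp/spdk.sock                # assumed target-side RPC socket

  # Start bdevperf idle (-z) on its own RPC socket: 4 KiB random writes,
  # queue depth 128, 2-second run, core mask 0x2.
  "$SPDK_DIR/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
      -w randwrite -o 4096 -t 2 -q 128 -z &
  bperf_pid=$!
  until "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" rpc_get_methods &>/dev/null; do
      sleep 0.5   # crude stand-in for the harness's waitforlisten helper
  done

  # Keep per-NVMe error counters and retry failed commands indefinitely.
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # Attach the subsystem with data digest enabled, then have the accel layer
  # corrupt every 256th CRC-32C so digests stop matching. (The trace issues the
  # injection through rpc_cmd; routing it to TGT_SOCK is an assumption here.)
  "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  "$SPDK_DIR/scripts/rpc.py" -s "$TGT_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the workload, then read back the transient transport error counter.
  "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  echo "transient transport errors: ${errcount:-0}"

  kill "$bperf_pid"

With --bdev-retry-count -1 every digest-failed command is retried rather than surfaced as a failure, which is consistent with the earlier randread pass completing normally while bdev_get_iostat reported 190 command_transient_transport_error completions.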
00:26:23.607 [2024-07-10 15:50:02.952498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ee5c8 00:26:23.607 [2024-07-10 15:50:02.953569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.607 [2024-07-10 15:50:02.953622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.607 [2024-07-10 15:50:02.965297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e8d30 00:26:23.607 [2024-07-10 15:50:02.966362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.607 [2024-07-10 15:50:02.966403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.607 [2024-07-10 15:50:02.978123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e1710 00:26:23.607 [2024-07-10 15:50:02.979237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:10528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.607 [2024-07-10 15:50:02.979271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:23.865 [2024-07-10 15:50:02.991451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e1f80 00:26:23.865 [2024-07-10 15:50:02.992617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.865 [2024-07-10 15:50:02.992659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.865 [2024-07-10 15:50:03.004174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e7c50 00:26:23.865 [2024-07-10 15:50:03.005371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14891 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.865 [2024-07-10 15:50:03.005404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:23.865 [2024-07-10 15:50:03.016855] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e7c50 00:26:23.865 [2024-07-10 15:50:03.018035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.865 [2024-07-10 15:50:03.018068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:23.865 [2024-07-10 15:50:03.029501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e7c50 00:26:23.866 [2024-07-10 15:50:03.030693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.030737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:004c p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.042075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e7c50 00:26:23.866 [2024-07-10 15:50:03.043316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.043348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.054729] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e7c50 00:26:23.866 [2024-07-10 15:50:03.055975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.056006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.068185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e95a0 00:26:23.866 [2024-07-10 15:50:03.069533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:22609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.069563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.081165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.082117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:1034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.082161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.094209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.095416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.095458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.107011] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.108224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.108256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.119765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.121004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.121035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:122 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.132582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.133795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.133824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.145335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.146562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.146590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.158099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.159295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.159327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.170867] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.172106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.172138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.183655] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.184968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.185000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.196544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.197821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.197852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.209334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.210662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:2814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.210694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.222019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.223377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.223409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:23.866 [2024-07-10 15:50:03.234810] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:23.866 [2024-07-10 15:50:03.236152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:7310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:23.866 [2024-07-10 15:50:03.236184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.248273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:24.124 [2024-07-10 15:50:03.249640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.249670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.261070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:24.124 [2024-07-10 15:50:03.262508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.262536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.273830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:24.124 [2024-07-10 15:50:03.275231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.275263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.286641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:24.124 [2024-07-10 15:50:03.288057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.288088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.299418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:24.124 [2024-07-10 15:50:03.300815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.300855] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.312182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e1b48 00:26:24.124 [2024-07-10 15:50:03.313614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.313643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.325091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f46d0 00:26:24.124 [2024-07-10 15:50:03.326513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.326541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.337783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.124 [2024-07-10 15:50:03.339219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.339250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.350514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190dfdc0 00:26:24.124 [2024-07-10 15:50:03.351913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:3190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.351944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.363158] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ef270 00:26:24.124 [2024-07-10 15:50:03.364635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.364663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.375125] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f2d80 00:26:24.124 [2024-07-10 15:50:03.375619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.375645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.388237] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.124 [2024-07-10 15:50:03.389377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 
15:50:03.389409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.401047] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.124 [2024-07-10 15:50:03.402184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.402215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.413806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.124 [2024-07-10 15:50:03.414988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.415020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.426653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.124 [2024-07-10 15:50:03.427806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:13499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.427837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.439452] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.124 [2024-07-10 15:50:03.440633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:2652 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.124 [2024-07-10 15:50:03.440661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.124 [2024-07-10 15:50:03.452252] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.124 [2024-07-10 15:50:03.453520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.125 [2024-07-10 15:50:03.453548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:24.125 [2024-07-10 15:50:03.465128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.125 [2024-07-10 15:50:03.466356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.125 [2024-07-10 15:50:03.466388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:24.125 [2024-07-10 15:50:03.477873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.125 [2024-07-10 15:50:03.479135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:24.125 [2024-07-10 15:50:03.479166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:24.125 [2024-07-10 15:50:03.490684] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.125 [2024-07-10 15:50:03.491974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.125 [2024-07-10 15:50:03.492005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.504045] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.505310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.505344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.516924] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.518239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11789 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.518272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.529839] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.531158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.531190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.542745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.544072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:12498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.544103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.555535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.556840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:22161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.556872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.568268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.569599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:14520 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.569627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.581100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.582444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:11323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.582500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.593895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.595270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.595301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.606677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5220 00:26:24.383 [2024-07-10 15:50:03.608107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.608138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.619588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e7c50 00:26:24.383 [2024-07-10 15:50:03.620960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.620991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.632259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e4578 00:26:24.383 [2024-07-10 15:50:03.633667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.633699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.645104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ef6a8 00:26:24.383 [2024-07-10 15:50:03.646520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.646548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.657841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e0a68 00:26:24.383 [2024-07-10 15:50:03.659260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:18988 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.659291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.670553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ef270 00:26:24.383 [2024-07-10 15:50:03.671970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.672001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.683209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e4140 00:26:24.383 [2024-07-10 15:50:03.684621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.684649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.695955] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f1430 00:26:24.383 [2024-07-10 15:50:03.697409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.697447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.707214] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eb328 00:26:24.383 [2024-07-10 15:50:03.708248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6274 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.708279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.719914] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f6020 00:26:24.383 [2024-07-10 15:50:03.720908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:17440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.720937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.732488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ee190 00:26:24.383 [2024-07-10 15:50:03.733585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:12590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.733613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.745116] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eaab8 00:26:24.383 [2024-07-10 15:50:03.746263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:115 nsid:1 lba:12932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.383 [2024-07-10 15:50:03.746294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.383 [2024-07-10 15:50:03.758049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eaab8 00:26:24.642 [2024-07-10 15:50:03.759328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:14836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.759363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.770974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eaab8 00:26:24.642 [2024-07-10 15:50:03.772198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.772232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.783670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eaab8 00:26:24.642 [2024-07-10 15:50:03.784899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.784931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.796180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eaab8 00:26:24.642 [2024-07-10 15:50:03.797417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.797459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.808949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eaab8 00:26:24.642 [2024-07-10 15:50:03.810282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.810316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.821910] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e0630 00:26:24.642 [2024-07-10 15:50:03.823069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.823102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.833963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fb8b8 00:26:24.642 [2024-07-10 15:50:03.834369] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5220 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.834414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.847245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e23b8 00:26:24.642 [2024-07-10 15:50:03.848450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:22587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.848497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.859816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f46d0 00:26:24.642 [2024-07-10 15:50:03.861085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.861117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.872511] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ed4e8 00:26:24.642 [2024-07-10 15:50:03.873721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.873747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.885197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e6738 00:26:24.642 [2024-07-10 15:50:03.886384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:20016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.886415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.897780] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e4de8 00:26:24.642 [2024-07-10 15:50:03.899062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.899093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.910327] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e12d8 00:26:24.642 [2024-07-10 15:50:03.911563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:1996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.911591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.923179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fb8b8 00:26:24.642 [2024-07-10 15:50:03.924210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:8264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.642 [2024-07-10 15:50:03.924241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:24.642 [2024-07-10 15:50:03.936283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f2510 00:26:24.643 [2024-07-10 15:50:03.937586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:14609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.643 [2024-07-10 15:50:03.937615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:24.643 [2024-07-10 15:50:03.948925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f2510 00:26:24.643 [2024-07-10 15:50:03.950246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:8488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.643 [2024-07-10 15:50:03.950278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:24.643 [2024-07-10 15:50:03.961711] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f2510 00:26:24.643 [2024-07-10 15:50:03.963055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.643 [2024-07-10 15:50:03.963093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:24.643 [2024-07-10 15:50:03.974312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f2510 00:26:24.643 [2024-07-10 15:50:03.975654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.643 [2024-07-10 15:50:03.975683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:24.643 [2024-07-10 15:50:03.986134] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.643 [2024-07-10 15:50:03.986588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:13243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.643 [2024-07-10 15:50:03.986617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:24.643 [2024-07-10 15:50:03.999687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.643 [2024-07-10 15:50:04.000831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.643 [2024-07-10 15:50:04.000862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:24.643 [2024-07-10 15:50:04.012477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.643 [2024-07-10 
15:50:04.013645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.643 [2024-07-10 15:50:04.013676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:24.901 [2024-07-10 15:50:04.025725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.901 [2024-07-10 15:50:04.026999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.901 [2024-07-10 15:50:04.027034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:24.901 [2024-07-10 15:50:04.038608] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.901 [2024-07-10 15:50:04.039816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:9233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.901 [2024-07-10 15:50:04.039849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:24.901 [2024-07-10 15:50:04.051373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.901 [2024-07-10 15:50:04.052634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21515 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.901 [2024-07-10 15:50:04.052662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:24.901 [2024-07-10 15:50:04.064086] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.901 [2024-07-10 15:50:04.065305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.901 [2024-07-10 15:50:04.065336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:24.901 [2024-07-10 15:50:04.076851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.901 [2024-07-10 15:50:04.078171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.901 [2024-07-10 15:50:04.078203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:24.901 [2024-07-10 15:50:04.089859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.902 [2024-07-10 15:50:04.091134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.091165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.102611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 
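Note: every entry in this run follows the same two-step pattern — the TCP layer reports a CRC32C data digest mismatch (tcp.c:2034:data_crc32_calc_done), and the affected WRITE is then completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). A quick way to tally a run like this from a saved copy of the console output (the log file name below is only an assumption) is:

  # count injected digest errors and the matching transient-transport completions
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' console.log
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' console.log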
00:26:24.902 [2024-07-10 15:50:04.103873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.103905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.115338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.902 [2024-07-10 15:50:04.116649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.116692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.127990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.902 [2024-07-10 15:50:04.129318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.129349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.140842] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190edd58 00:26:24.902 [2024-07-10 15:50:04.142204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.142235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.153628] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190efae0 00:26:24.902 [2024-07-10 15:50:04.154937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:17053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.154968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.166262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ef6a8 00:26:24.902 [2024-07-10 15:50:04.167615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.167641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.178885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e1f80 00:26:24.902 [2024-07-10 15:50:04.180239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.180271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.191590] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x10631a0) with pdu=0x2000190e7818 00:26:24.902 [2024-07-10 15:50:04.192962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.192995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.204320] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ed920 00:26:24.902 [2024-07-10 15:50:04.205645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:16598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.205671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.216963] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f5be8 00:26:24.902 [2024-07-10 15:50:04.218192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.218224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.229813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f5be8 00:26:24.902 [2024-07-10 15:50:04.231055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.231087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.242553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f5be8 00:26:24.902 [2024-07-10 15:50:04.243848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.243879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.255479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f5be8 00:26:24.902 [2024-07-10 15:50:04.256701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:18845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.256747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:24.902 [2024-07-10 15:50:04.268305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f5be8 00:26:24.902 [2024-07-10 15:50:04.269516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:1244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.902 [2024-07-10 15:50:04.269545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.281664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x10631a0) with pdu=0x2000190f5be8 00:26:25.162 [2024-07-10 15:50:04.282889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24505 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.282924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.294423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f1868 00:26:25.162 [2024-07-10 15:50:04.296141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.296180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.307349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ef6a8 00:26:25.162 [2024-07-10 15:50:04.308286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:1857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.308319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.320287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e1b48 00:26:25.162 [2024-07-10 15:50:04.322043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.322075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.333152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e4de8 00:26:25.162 [2024-07-10 15:50:04.334610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.334641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.345830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fc128 00:26:25.162 [2024-07-10 15:50:04.347279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:20632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.347310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.358640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e8d30 00:26:25.162 [2024-07-10 15:50:04.360035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.360067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.371340] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190ee190 00:26:25.162 [2024-07-10 15:50:04.372846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:11393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.372878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.384061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fb048 00:26:25.162 [2024-07-10 15:50:04.385586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.385612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.396814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fc998 00:26:25.162 [2024-07-10 15:50:04.398281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.398313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.409554] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fc560 00:26:25.162 [2024-07-10 15:50:04.411021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.411053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.422219] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fef90 00:26:25.162 [2024-07-10 15:50:04.423730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.423761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.434902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fef90 00:26:25.162 [2024-07-10 15:50:04.436413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.436451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.447604] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fc560 00:26:25.162 [2024-07-10 15:50:04.448878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.448909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.162 
[2024-07-10 15:50:04.460356] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fc560 00:26:25.162 [2024-07-10 15:50:04.461691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:2766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.461733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.473111] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fc560 00:26:25.162 [2024-07-10 15:50:04.474435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:3736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.474465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.485819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f8618 00:26:25.162 [2024-07-10 15:50:04.487254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:25333 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.487286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.498498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e01f8 00:26:25.162 [2024-07-10 15:50:04.499903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:4556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.499935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.511264] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e99d8 00:26:25.162 [2024-07-10 15:50:04.512922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.512953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.523931] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e99d8 00:26:25.162 [2024-07-10 15:50:04.525915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:8962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.525947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:25.162 [2024-07-10 15:50:04.535807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f0788 00:26:25.162 [2024-07-10 15:50:04.537022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.162 [2024-07-10 15:50:04.537056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 
dnr:0 00:26:25.421 [2024-07-10 15:50:04.548878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f0bc0 00:26:25.421 [2024-07-10 15:50:04.550076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.421 [2024-07-10 15:50:04.550110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:25.421 [2024-07-10 15:50:04.561448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190dfdc0 00:26:25.421 [2024-07-10 15:50:04.562643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.421 [2024-07-10 15:50:04.562675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:25.421 [2024-07-10 15:50:04.574043] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fbcf0 00:26:25.421 [2024-07-10 15:50:04.575254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:22974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.421 [2024-07-10 15:50:04.575286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.421 [2024-07-10 15:50:04.587021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fb480 00:26:25.421 [2024-07-10 15:50:04.588144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:25152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.421 [2024-07-10 15:50:04.588176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:25.421 [2024-07-10 15:50:04.600044] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f8e88 00:26:25.421 [2024-07-10 15:50:04.600437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.421 [2024-07-10 15:50:04.600463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:25.421 [2024-07-10 15:50:04.613061] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.421 [2024-07-10 15:50:04.614141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.421 [2024-07-10 15:50:04.614172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.421 [2024-07-10 15:50:04.625829] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.421 [2024-07-10 15:50:04.626951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.421 [2024-07-10 15:50:04.626987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 
sqhd:0067 p:0 m:0 dnr:0 00:26:25.421 [2024-07-10 15:50:04.638627] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.421 [2024-07-10 15:50:04.639798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:11464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.639829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.651595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.652710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8440 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.652754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.664487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.665660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:18817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.665688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.677325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.678527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.678555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.690156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.691368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:3661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.691399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.703024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.704245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.704276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.715930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.717153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:15300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.717184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:79 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.728743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.730032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:15559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.730063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.741518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.742811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.742843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.754347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.755672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:74 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.755699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.767187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.768443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.768475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.779991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.781298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.781329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:25.422 [2024-07-10 15:50:04.792770] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.422 [2024-07-10 15:50:04.794282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.422 [2024-07-10 15:50:04.794316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.806172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.680 [2024-07-10 15:50:04.807569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:3035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.807598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.819152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.680 [2024-07-10 15:50:04.820584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.820625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.831939] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.680 [2024-07-10 15:50:04.833271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.833303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.844779] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.680 [2024-07-10 15:50:04.846166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:18753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.846198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.857661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e5ec8 00:26:25.680 [2024-07-10 15:50:04.859035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.859067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.870419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f1430 00:26:25.680 [2024-07-10 15:50:04.871813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.871844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.883211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190e9168 00:26:25.680 [2024-07-10 15:50:04.884639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.884665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.895897] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190eaab8 00:26:25.680 [2024-07-10 15:50:04.897289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.897320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.907817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190fd208 00:26:25.680 [2024-07-10 15:50:04.908190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.908216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.920940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f1430 00:26:25.680 [2024-07-10 15:50:04.922041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.922073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:25.680 [2024-07-10 15:50:04.933659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x10631a0) with pdu=0x2000190f1430 00:26:25.680 [2024-07-10 15:50:04.934814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:8283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:25.680 [2024-07-10 15:50:04.934845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:25.680 00:26:25.680 Latency(us) 00:26:25.681 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.681 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:25.681 nvme0n1 : 2.00 19951.41 77.94 0.00 0.00 6407.28 3301.07 13398.47 00:26:25.681 =================================================================================================================== 00:26:25.681 Total : 19951.41 77.94 0.00 0.00 6407.28 3301.07 13398.47 00:26:25.681 0 00:26:25.681 15:50:04 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:25.681 15:50:04 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:25.681 15:50:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:25.681 15:50:04 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:25.681 | .driver_specific 00:26:25.681 | .nvme_error 00:26:25.681 | .status_code 00:26:25.681 | .command_transient_transport_error' 00:26:25.939 15:50:05 -- host/digest.sh@71 -- # (( 156 > 0 )) 00:26:25.939 15:50:05 -- host/digest.sh@73 -- # killprocess 2226844 00:26:25.939 15:50:05 -- common/autotest_common.sh@926 -- # '[' -z 2226844 ']' 00:26:25.939 15:50:05 -- common/autotest_common.sh@930 -- # kill -0 2226844 00:26:25.939 15:50:05 -- common/autotest_common.sh@931 -- # uname 00:26:25.939 15:50:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:25.939 15:50:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2226844 00:26:25.939 15:50:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:25.939 15:50:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:25.939 15:50:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2226844' 00:26:25.939 killing process with pid 2226844 00:26:25.939 
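The pass/fail decision in the trace above reduces to reading the per-bdev NVMe error counters over bdevperf's RPC socket and asserting that at least one transient transport error was counted — 156 in this run. Condensed into a standalone form (paths, socket, bdev name and jq filter are taken from the trace; the variable names are mine):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # pull the transient-transport-error counter out of bdevperf's iostat JSON
  errcount=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 )) || exit 1   # this run saw 156 such completions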
15:50:05 -- common/autotest_common.sh@945 -- # kill 2226844 00:26:25.939 Received shutdown signal, test time was about 2.000000 seconds 00:26:25.939 00:26:25.939 Latency(us) 00:26:25.939 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.939 =================================================================================================================== 00:26:25.939 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.939 15:50:05 -- common/autotest_common.sh@950 -- # wait 2226844 00:26:26.197 15:50:05 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:26:26.197 15:50:05 -- host/digest.sh@54 -- # local rw bs qd 00:26:26.197 15:50:05 -- host/digest.sh@56 -- # rw=randwrite 00:26:26.197 15:50:05 -- host/digest.sh@56 -- # bs=131072 00:26:26.197 15:50:05 -- host/digest.sh@56 -- # qd=16 00:26:26.197 15:50:05 -- host/digest.sh@58 -- # bperfpid=2227392 00:26:26.197 15:50:05 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:26.197 15:50:05 -- host/digest.sh@60 -- # waitforlisten 2227392 /var/tmp/bperf.sock 00:26:26.197 15:50:05 -- common/autotest_common.sh@819 -- # '[' -z 2227392 ']' 00:26:26.197 15:50:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:26.197 15:50:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:26.197 15:50:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:26.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:26.197 15:50:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:26.197 15:50:05 -- common/autotest_common.sh@10 -- # set +x 00:26:26.197 [2024-07-10 15:50:05.524862] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:26.197 [2024-07-10 15:50:05.524938] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227392 ] 00:26:26.197 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:26.197 Zero copy mechanism will not be used. 
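The next pass of the error test (run_bperf_err randwrite 131072 16) drives 128 KiB random writes at queue depth 16 through a fresh bdevperf instance started in RPC-controlled mode; the zero-copy notice above is expected, since 131072 bytes exceeds the 65536-byte zero-copy threshold. The launch, pulled out of the trace (a sketch; only the wait loop is my own stand-in for the suite's waitforlisten helper):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start bdevperf idle (-z) on core 1 (mask 0x2); the workload is kicked off later via perform_tests
  "$SPDK_ROOT/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  while [ ! -S /var/tmp/bperf.sock ]; do sleep 0.1; done   # stand-in for the suite's waitforlisten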
00:26:26.197 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.455 [2024-07-10 15:50:05.583121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.455 [2024-07-10 15:50:05.695248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.390 15:50:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:27.390 15:50:06 -- common/autotest_common.sh@852 -- # return 0 00:26:27.390 15:50:06 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:27.390 15:50:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:27.390 15:50:06 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:27.390 15:50:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.390 15:50:06 -- common/autotest_common.sh@10 -- # set +x 00:26:27.390 15:50:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.390 15:50:06 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.390 15:50:06 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.956 nvme0n1 00:26:27.956 15:50:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:27.956 15:50:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:27.956 15:50:07 -- common/autotest_common.sh@10 -- # set +x 00:26:27.956 15:50:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:27.956 15:50:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:27.956 15:50:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:27.956 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:27.956 Zero copy mechanism will not be used. 00:26:27.956 Running I/O for 2 seconds... 
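Before perform_tests is issued, the trace configures both sides over RPC: the bdev_nvme layer in bdevperf keeps NVMe error statistics and retries indefinitely, the controller is attached over TCP with the data digest (--ddgst) enabled, and CRC32C error injection is armed through the suite's rpc_cmd helper rather than bperf.sock (presumably the NVMe-oF target application, which is then what reports the digest mismatches and completes the writes with 00/22). The same calls, laid out flat (the rpc_cmd socket is an assumption — the application's default RPC socket):

  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  BPERF_RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"   # the bdevperf instance
  TGT_RPC="$SPDK_ROOT/scripts/rpc.py"                            # target app, default RPC socket (assumed)
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count errors, never give up
  $TGT_RPC accel_error_inject_error -o crc32c -t disable                     # make sure injection starts off
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # connect with data digest on
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32               # corrupt every 32nd CRC32C result
  "$SPDK_ROOT/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests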
00:26:27.956 [2024-07-10 15:50:07.240174] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:27.956 [2024-07-10 15:50:07.240474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.956 [2024-07-10 15:50:07.240513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:27.956 [2024-07-10 15:50:07.261063] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:27.956 [2024-07-10 15:50:07.261444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.956 [2024-07-10 15:50:07.261475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:27.956 [2024-07-10 15:50:07.280869] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:27.956 [2024-07-10 15:50:07.281241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.956 [2024-07-10 15:50:07.281271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:27.956 [2024-07-10 15:50:07.302395] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:27.956 [2024-07-10 15:50:07.303117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.956 [2024-07-10 15:50:07.303161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:27.956 [2024-07-10 15:50:07.324380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:27.956 [2024-07-10 15:50:07.324835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:27.956 [2024-07-10 15:50:07.324879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.345862] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.346465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.346494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.362640] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.362772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.362803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.377119] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.377662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.377692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.391616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.392053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.392082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.406788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.407216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.407244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.422579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.422896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.422924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.437051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.437445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.437489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.451446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.451810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.451838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.466565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.466906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.466935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.481811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.482175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.482203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.496784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.497208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.497237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.511529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.511954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.511983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.526288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.526702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.526743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.541150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.541509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.541537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.555553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.555897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.555927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.570010] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.570395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.570423] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.215 [2024-07-10 15:50:07.584744] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.215 [2024-07-10 15:50:07.585188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.215 [2024-07-10 15:50:07.585216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.600535] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.600992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.601037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.615287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.615691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.615727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.630392] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.630909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.630937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.644611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.644928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.644958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.659042] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.659452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.659480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.672136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.672362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 
15:50:07.672391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.687371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.687735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.687764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.702895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.703281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.703309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.718754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.719174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.719201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.734484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.734977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.735007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.749801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.750072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.750101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.764273] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.764657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.764686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.780391] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.780703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:28.474 [2024-07-10 15:50:07.780737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.794811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.795263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.795306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.810854] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.811175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.811203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.824343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.824945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.824974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.474 [2024-07-10 15:50:07.840150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.474 [2024-07-10 15:50:07.840524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.474 [2024-07-10 15:50:07.840553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.855476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.855926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.855957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.870797] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.871188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.871217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.885468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.885898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.885926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.901246] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.901616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.901646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.916277] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.916732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.916761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.931884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.932320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.932348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.946674] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.947151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.947179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.732 [2024-07-10 15:50:07.961591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.732 [2024-07-10 15:50:07.961831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.732 [2024-07-10 15:50:07.961860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:07.977106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:07.977655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:07.977684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:07.992546] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:07.992960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:07.992988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:08.006743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:08.007035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:08.007074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:08.022478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:08.022940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:08.022969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:08.037064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:08.037441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:08.037469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:08.052581] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:08.052883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:08.052912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:08.067481] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:08.067853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:08.067881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:08.081579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:08.081983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:08.082011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.733 [2024-07-10 15:50:08.096748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.733 [2024-07-10 15:50:08.097089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.733 [2024-07-10 15:50:08.097118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.111208] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.991 [2024-07-10 15:50:08.111546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.991 [2024-07-10 15:50:08.111577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.126455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.991 [2024-07-10 15:50:08.126934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.991 [2024-07-10 15:50:08.126963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.141901] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.991 [2024-07-10 15:50:08.142263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.991 [2024-07-10 15:50:08.142292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.157493] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.991 [2024-07-10 15:50:08.157804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.991 [2024-07-10 15:50:08.157833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.173067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.991 [2024-07-10 15:50:08.173413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.991 [2024-07-10 15:50:08.173449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.187846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.991 [2024-07-10 15:50:08.188306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.991 [2024-07-10 15:50:08.188334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.203643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.991 
[2024-07-10 15:50:08.203984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.991 [2024-07-10 15:50:08.204012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.991 [2024-07-10 15:50:08.218272] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.218458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.218487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.233767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.234156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.234185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.248994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.249371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.249400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.264295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.264604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.264633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.278193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.278620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.278650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.292858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.293165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.293193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.308170] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with 
pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.308527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.308555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.322543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.322937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.322966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.337388] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.337680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.337708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:28.992 [2024-07-10 15:50:08.352278] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:28.992 [2024-07-10 15:50:08.352627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:28.992 [2024-07-10 15:50:08.352656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.250 [2024-07-10 15:50:08.368280] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.250 [2024-07-10 15:50:08.368713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-07-10 15:50:08.368745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.250 [2024-07-10 15:50:08.383138] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.250 [2024-07-10 15:50:08.383545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-07-10 15:50:08.383576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.250 [2024-07-10 15:50:08.397960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.250 [2024-07-10 15:50:08.398405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-07-10 15:50:08.398462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.250 [2024-07-10 15:50:08.413262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.250 [2024-07-10 15:50:08.413679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-07-10 15:50:08.413708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.250 [2024-07-10 15:50:08.428412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.250 [2024-07-10 15:50:08.428765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.250 [2024-07-10 15:50:08.428794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.250 [2024-07-10 15:50:08.442765] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.443161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.443190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.458591] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.459020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.459048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.473166] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.473606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.473636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.489046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.489504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.489533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.505110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.505592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.505622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.519207] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.519575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.519604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.534560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.535029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.535057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.550282] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.550635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.550664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.565737] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.566166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.566194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.581102] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.581562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.581591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.596146] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.596753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.596796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.251 [2024-07-10 15:50:08.611400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.611844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.611873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:26:29.251 [2024-07-10 15:50:08.625761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.251 [2024-07-10 15:50:08.626099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.251 [2024-07-10 15:50:08.626129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.640480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.640867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.640898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.656805] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.657230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.657258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.671612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.672028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.672056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.687516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.687867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.687895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.702771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.703186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.703230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.718213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.718703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.718733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.733751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.734116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.734145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.749078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.749445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.749474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.764165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.764521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.764551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.779220] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.779680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.779709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.793748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.794134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.794168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.809936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.810328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.810357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.824740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.825167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.825197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.840399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.840787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.840817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.855468] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.855851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.855896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.870775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.871141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.871170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.510 [2024-07-10 15:50:08.885295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.510 [2024-07-10 15:50:08.885707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.510 [2024-07-10 15:50:08.885741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:08.899187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:08.899543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:08.899574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:08.914433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:08.914689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:08.914717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:08.930152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:08.930587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:08.930629] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:08.945089] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:08.945533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:08.945561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:08.960222] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:08.960650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:08.960679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:08.975607] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:08.976030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:08.976058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:08.990484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:08.990834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:08.990862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.006165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.006466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.006494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.020826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.021175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.021203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.036683] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.037093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 
[2024-07-10 15:50:09.037123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.051451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.051836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.051864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.067460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.067916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.067943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.081947] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.082300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.082329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.097771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.098156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.098185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.112912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.113303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.113331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.128033] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.128411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.128447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.769 [2024-07-10 15:50:09.143560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:29.769 [2024-07-10 15:50:09.143910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:29.769 [2024-07-10 15:50:09.143941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.027 [2024-07-10 15:50:09.157808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:30.027 [2024-07-10 15:50:09.158104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.027 [2024-07-10 15:50:09.158135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:30.027 [2024-07-10 15:50:09.172877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:30.027 [2024-07-10 15:50:09.173353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.027 [2024-07-10 15:50:09.173381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:30.027 [2024-07-10 15:50:09.188288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:30.027 [2024-07-10 15:50:09.188657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.027 [2024-07-10 15:50:09.188692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.027 [2024-07-10 15:50:09.202828] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:30.027 [2024-07-10 15:50:09.203291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.027 [2024-07-10 15:50:09.203320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:30.027 [2024-07-10 15:50:09.218563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xed1210) with pdu=0x2000190fef90 00:26:30.027 [2024-07-10 15:50:09.218910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.027 [2024-07-10 15:50:09.218938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:30.027
00:26:30.027 Latency(us)
00:26:30.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:30.027 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:30.027 nvme0n1 : 2.01 2021.63 252.70 0.00 0.00 7893.79 5631.24 22816.24
00:26:30.027 ===================================================================================================================
00:26:30.027 Total : 2021.63 252.70 0.00 0.00 7893.79 5631.24 22816.24
00:26:30.027 0
00:26:30.027 15:50:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:30.027 15:50:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:30.027 15:50:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:30.028 | .driver_specific 00:26:30.028 |
.nvme_error 00:26:30.028 | .status_code 00:26:30.028 | .command_transient_transport_error' 00:26:30.028 15:50:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:30.285 15:50:09 -- host/digest.sh@71 -- # (( 130 > 0 )) 00:26:30.286 15:50:09 -- host/digest.sh@73 -- # killprocess 2227392 00:26:30.286 15:50:09 -- common/autotest_common.sh@926 -- # '[' -z 2227392 ']' 00:26:30.286 15:50:09 -- common/autotest_common.sh@930 -- # kill -0 2227392 00:26:30.286 15:50:09 -- common/autotest_common.sh@931 -- # uname 00:26:30.286 15:50:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:30.286 15:50:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2227392 00:26:30.286 15:50:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:30.286 15:50:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:30.286 15:50:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2227392' 00:26:30.286 killing process with pid 2227392 00:26:30.286 15:50:09 -- common/autotest_common.sh@945 -- # kill 2227392 00:26:30.286 Received shutdown signal, test time was about 2.000000 seconds 00:26:30.286 00:26:30.286 Latency(us) 00:26:30.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.286 =================================================================================================================== 00:26:30.286 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:30.286 15:50:09 -- common/autotest_common.sh@950 -- # wait 2227392 00:26:30.544 15:50:09 -- host/digest.sh@115 -- # killprocess 2225698 00:26:30.544 15:50:09 -- common/autotest_common.sh@926 -- # '[' -z 2225698 ']' 00:26:30.544 15:50:09 -- common/autotest_common.sh@930 -- # kill -0 2225698 00:26:30.544 15:50:09 -- common/autotest_common.sh@931 -- # uname 00:26:30.544 15:50:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:30.544 15:50:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2225698 00:26:30.544 15:50:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:30.544 15:50:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:30.544 15:50:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2225698' 00:26:30.544 killing process with pid 2225698 00:26:30.544 15:50:09 -- common/autotest_common.sh@945 -- # kill 2225698 00:26:30.544 15:50:09 -- common/autotest_common.sh@950 -- # wait 2225698 00:26:30.803 00:26:30.803 real 0m18.238s 00:26:30.803 user 0m37.233s 00:26:30.803 sys 0m4.149s 00:26:30.803 15:50:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.803 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:26:30.803 ************************************ 00:26:30.803 END TEST nvmf_digest_error 00:26:30.803 ************************************ 00:26:30.803 15:50:10 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:26:30.803 15:50:10 -- host/digest.sh@139 -- # nvmftestfini 00:26:30.803 15:50:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:30.803 15:50:10 -- nvmf/common.sh@116 -- # sync 00:26:30.803 15:50:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:30.803 15:50:10 -- nvmf/common.sh@119 -- # set +e 00:26:30.803 15:50:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:30.803 15:50:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:30.803 rmmod nvme_tcp 00:26:30.803 rmmod nvme_fabrics 00:26:30.803 rmmod nvme_keyring 00:26:30.803 15:50:10 
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:30.803 15:50:10 -- nvmf/common.sh@123 -- # set -e 00:26:30.803 15:50:10 -- nvmf/common.sh@124 -- # return 0 00:26:30.803 15:50:10 -- nvmf/common.sh@477 -- # '[' -n 2225698 ']' 00:26:30.803 15:50:10 -- nvmf/common.sh@478 -- # killprocess 2225698 00:26:30.803 15:50:10 -- common/autotest_common.sh@926 -- # '[' -z 2225698 ']' 00:26:30.803 15:50:10 -- common/autotest_common.sh@930 -- # kill -0 2225698 00:26:30.803 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2225698) - No such process 00:26:30.803 15:50:10 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2225698 is not found' 00:26:30.803 Process with pid 2225698 is not found 00:26:30.803 15:50:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:30.803 15:50:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:30.803 15:50:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:30.803 15:50:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:30.803 15:50:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:30.803 15:50:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:30.803 15:50:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:30.803 15:50:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.334 15:50:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:33.334 00:26:33.334 real 0m37.869s 00:26:33.334 user 1m6.712s 00:26:33.334 sys 0m9.929s 00:26:33.334 15:50:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:33.334 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:26:33.334 ************************************ 00:26:33.334 END TEST nvmf_digest 00:26:33.334 ************************************ 00:26:33.334 15:50:12 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:26:33.334 15:50:12 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:26:33.334 15:50:12 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:26:33.334 15:50:12 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:33.334 15:50:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:33.334 15:50:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:33.334 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:26:33.334 ************************************ 00:26:33.334 START TEST nvmf_bdevperf 00:26:33.334 ************************************ 00:26:33.334 15:50:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:33.334 * Looking for test storage... 
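For readability, the nvmftestfini sequence traced above condenses to roughly the following commands (module names and the cvl_0_1 interface are taken from the trace; _remove_spdk_ns itself is not expanded in the log, so the namespace deletion shown here is an assumption, and $nvmfpid stands for the target pid, 2225698 in this run):

    # unload the kernel NVMe/TCP initiator stack loaded for the digest tests
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    # killprocess: kill -0 only probes whether the target pid still exists
    kill -0 "$nvmfpid" 2>/dev/null && kill "$nvmfpid"
    # assumed body of _remove_spdk_ns: drop the target namespace created by nvmftestinit
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    # flush the initiator-side address before the next test re-creates the topology
    ip -4 addr flush cvl_0_1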
00:26:33.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:33.334 15:50:12 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:33.334 15:50:12 -- nvmf/common.sh@7 -- # uname -s 00:26:33.334 15:50:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:33.334 15:50:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:33.334 15:50:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:33.334 15:50:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:33.335 15:50:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:33.335 15:50:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:33.335 15:50:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:33.335 15:50:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:33.335 15:50:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:33.335 15:50:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:33.335 15:50:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.335 15:50:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:33.335 15:50:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:33.335 15:50:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:33.335 15:50:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:33.335 15:50:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:33.335 15:50:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:33.335 15:50:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:33.335 15:50:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:33.335 15:50:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.335 15:50:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.335 15:50:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.335 15:50:12 -- paths/export.sh@5 -- # export PATH 00:26:33.335 15:50:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:33.335 15:50:12 -- nvmf/common.sh@46 -- # : 0 00:26:33.335 15:50:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:33.335 15:50:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:33.335 15:50:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:33.335 15:50:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:33.335 15:50:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:33.335 15:50:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:33.335 15:50:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:33.335 15:50:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:33.335 15:50:12 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:33.335 15:50:12 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:33.335 15:50:12 -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:33.335 15:50:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:33.335 15:50:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:33.335 15:50:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:33.335 15:50:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:33.335 15:50:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:33.335 15:50:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.335 15:50:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:33.335 15:50:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:33.335 15:50:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:33.335 15:50:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:33.335 15:50:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:26:33.335 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:26:35.235 15:50:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:35.235 15:50:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:35.235 15:50:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:35.235 15:50:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:35.235 15:50:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:35.235 15:50:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:35.235 15:50:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:35.235 15:50:14 -- nvmf/common.sh@294 -- # net_devs=() 00:26:35.235 15:50:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:35.235 15:50:14 -- nvmf/common.sh@295 
-- # e810=() 00:26:35.235 15:50:14 -- nvmf/common.sh@295 -- # local -ga e810 00:26:35.235 15:50:14 -- nvmf/common.sh@296 -- # x722=() 00:26:35.235 15:50:14 -- nvmf/common.sh@296 -- # local -ga x722 00:26:35.235 15:50:14 -- nvmf/common.sh@297 -- # mlx=() 00:26:35.235 15:50:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:35.235 15:50:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:35.235 15:50:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:35.235 15:50:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:35.235 15:50:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:35.235 15:50:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:35.235 15:50:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:35.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:35.235 15:50:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:35.235 15:50:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:35.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:35.235 15:50:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:35.235 15:50:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:35.236 15:50:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:35.236 15:50:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.236 15:50:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:35.236 15:50:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.236 15:50:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:35.236 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:26:35.236 15:50:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.236 15:50:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:35.236 15:50:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:35.236 15:50:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:35.236 15:50:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:35.236 15:50:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:35.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:35.236 15:50:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:35.236 15:50:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:35.236 15:50:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:35.236 15:50:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:35.236 15:50:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:35.236 15:50:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:35.236 15:50:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:35.236 15:50:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:35.236 15:50:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:35.236 15:50:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:35.236 15:50:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:35.236 15:50:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:35.236 15:50:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:35.236 15:50:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:35.236 15:50:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:35.236 15:50:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:35.236 15:50:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:35.236 15:50:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:35.236 15:50:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:35.236 15:50:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:35.236 15:50:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:35.236 15:50:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:35.236 15:50:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:35.236 15:50:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:35.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:35.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:26:35.236 00:26:35.236 --- 10.0.0.2 ping statistics --- 00:26:35.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.236 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:26:35.236 15:50:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:35.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:35.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:26:35.236 00:26:35.236 --- 10.0.0.1 ping statistics --- 00:26:35.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:35.236 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:26:35.236 15:50:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:35.236 15:50:14 -- nvmf/common.sh@410 -- # return 0 00:26:35.236 15:50:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:35.236 15:50:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:35.236 15:50:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:35.236 15:50:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:35.236 15:50:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:35.236 15:50:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:35.236 15:50:14 -- host/bdevperf.sh@25 -- # tgt_init 00:26:35.236 15:50:14 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:35.236 15:50:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:35.236 15:50:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:35.236 15:50:14 -- common/autotest_common.sh@10 -- # set +x 00:26:35.236 15:50:14 -- nvmf/common.sh@469 -- # nvmfpid=2229849 00:26:35.236 15:50:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:35.236 15:50:14 -- nvmf/common.sh@470 -- # waitforlisten 2229849 00:26:35.236 15:50:14 -- common/autotest_common.sh@819 -- # '[' -z 2229849 ']' 00:26:35.236 15:50:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:35.236 15:50:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:35.236 15:50:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:35.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:35.236 15:50:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:35.236 15:50:14 -- common/autotest_common.sh@10 -- # set +x 00:26:35.236 [2024-07-10 15:50:14.517871] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:35.236 [2024-07-10 15:50:14.517948] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:35.236 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.236 [2024-07-10 15:50:14.585510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:35.494 [2024-07-10 15:50:14.700132] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:35.494 [2024-07-10 15:50:14.700288] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:35.494 [2024-07-10 15:50:14.700321] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:35.494 [2024-07-10 15:50:14.700334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
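In effect, the nvmfappstart step above launches the target inside the test namespace and then waits for its RPC socket to answer; a simplified sketch of what the traced helpers do (binary path, namespace name, and core mask copied from the log; the polling loop and the rpc_get_methods probe are illustrative simplifications, not the exact waitforlisten implementation) is:

    # start the SPDK NVMe-oF target in the namespace set up by nvmftestinit
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # waitforlisten: block until the app responds on /var/tmp/spdk.sock
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done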
00:26:35.494 [2024-07-10 15:50:14.700672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:35.494 [2024-07-10 15:50:14.704448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:35.494 [2024-07-10 15:50:14.704460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.425 15:50:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:36.425 15:50:15 -- common/autotest_common.sh@852 -- # return 0 00:26:36.425 15:50:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:36.425 15:50:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:36.425 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:36.425 15:50:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:36.425 15:50:15 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:36.425 15:50:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.425 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:36.425 [2024-07-10 15:50:15.536625] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:36.426 15:50:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.426 15:50:15 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:36.426 15:50:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.426 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:36.426 Malloc0 00:26:36.426 15:50:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.426 15:50:15 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:36.426 15:50:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.426 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:36.426 15:50:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.426 15:50:15 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:36.426 15:50:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.426 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:36.426 15:50:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.426 15:50:15 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:36.426 15:50:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:36.426 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:26:36.426 [2024-07-10 15:50:15.597006] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:36.426 15:50:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:36.426 15:50:15 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:36.426 15:50:15 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:36.426 15:50:15 -- nvmf/common.sh@520 -- # config=() 00:26:36.426 15:50:15 -- nvmf/common.sh@520 -- # local subsystem config 00:26:36.426 15:50:15 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:36.426 15:50:15 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:36.426 { 00:26:36.426 "params": { 00:26:36.426 "name": "Nvme$subsystem", 00:26:36.426 "trtype": "$TEST_TRANSPORT", 00:26:36.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:36.426 "adrfam": "ipv4", 00:26:36.426 "trsvcid": "$NVMF_PORT", 00:26:36.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:36.426 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:36.426 "hdgst": ${hdgst:-false}, 00:26:36.426 "ddgst": ${ddgst:-false} 00:26:36.426 }, 00:26:36.426 "method": "bdev_nvme_attach_controller" 00:26:36.426 } 00:26:36.426 EOF 00:26:36.426 )") 00:26:36.426 15:50:15 -- nvmf/common.sh@542 -- # cat 00:26:36.426 15:50:15 -- nvmf/common.sh@544 -- # jq . 00:26:36.426 15:50:15 -- nvmf/common.sh@545 -- # IFS=, 00:26:36.426 15:50:15 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:36.426 "params": { 00:26:36.426 "name": "Nvme1", 00:26:36.426 "trtype": "tcp", 00:26:36.426 "traddr": "10.0.0.2", 00:26:36.426 "adrfam": "ipv4", 00:26:36.426 "trsvcid": "4420", 00:26:36.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:36.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:36.426 "hdgst": false, 00:26:36.426 "ddgst": false 00:26:36.426 }, 00:26:36.426 "method": "bdev_nvme_attach_controller" 00:26:36.426 }' 00:26:36.426 [2024-07-10 15:50:15.643543] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:36.426 [2024-07-10 15:50:15.643624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229997 ] 00:26:36.426 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.426 [2024-07-10 15:50:15.706169] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:36.683 [2024-07-10 15:50:15.818789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:36.940 Running I/O for 1 seconds... 00:26:37.872 00:26:37.872 Latency(us) 00:26:37.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.872 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:37.872 Verification LBA range: start 0x0 length 0x4000 00:26:37.872 Nvme1n1 : 1.00 13101.89 51.18 0.00 0.00 9731.84 1195.43 15631.55 00:26:37.872 =================================================================================================================== 00:26:37.872 Total : 13101.89 51.18 0.00 0.00 9731.84 1195.43 15631.55 00:26:38.130 15:50:17 -- host/bdevperf.sh@30 -- # bdevperfpid=2230204 00:26:38.130 15:50:17 -- host/bdevperf.sh@32 -- # sleep 3 00:26:38.130 15:50:17 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:38.130 15:50:17 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:38.130 15:50:17 -- nvmf/common.sh@520 -- # config=() 00:26:38.130 15:50:17 -- nvmf/common.sh@520 -- # local subsystem config 00:26:38.130 15:50:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:38.130 15:50:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:38.130 { 00:26:38.130 "params": { 00:26:38.130 "name": "Nvme$subsystem", 00:26:38.130 "trtype": "$TEST_TRANSPORT", 00:26:38.130 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:38.130 "adrfam": "ipv4", 00:26:38.130 "trsvcid": "$NVMF_PORT", 00:26:38.130 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:38.130 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:38.130 "hdgst": ${hdgst:-false}, 00:26:38.130 "ddgst": ${ddgst:-false} 00:26:38.130 }, 00:26:38.130 "method": "bdev_nvme_attach_controller" 00:26:38.130 } 00:26:38.130 EOF 00:26:38.130 )") 00:26:38.130 15:50:17 -- nvmf/common.sh@542 -- # cat 00:26:38.130 15:50:17 -- nvmf/common.sh@544 -- # jq . 
00:26:38.130 15:50:17 -- nvmf/common.sh@545 -- # IFS=, 00:26:38.130 15:50:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:38.130 "params": { 00:26:38.130 "name": "Nvme1", 00:26:38.130 "trtype": "tcp", 00:26:38.130 "traddr": "10.0.0.2", 00:26:38.130 "adrfam": "ipv4", 00:26:38.130 "trsvcid": "4420", 00:26:38.130 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:38.130 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:38.130 "hdgst": false, 00:26:38.130 "ddgst": false 00:26:38.130 }, 00:26:38.130 "method": "bdev_nvme_attach_controller" 00:26:38.130 }' 00:26:38.130 [2024-07-10 15:50:17.471828] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:38.130 [2024-07-10 15:50:17.471910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2230204 ] 00:26:38.130 EAL: No free 2048 kB hugepages reported on node 1 00:26:38.388 [2024-07-10 15:50:17.532339] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.388 [2024-07-10 15:50:17.637358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:38.645 Running I/O for 15 seconds... 00:26:41.174 15:50:20 -- host/bdevperf.sh@33 -- # kill -9 2229849 00:26:41.174 15:50:20 -- host/bdevperf.sh@35 -- # sleep 3 00:26:41.174 [2024-07-10 15:50:20.448942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.448997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:12856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.174 [2024-07-10 15:50:20.449298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:12880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.174 [2024-07-10 15:50:20.449315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:13472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:50 nsid:1 lba:12928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:12944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:13080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:13560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.449974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.449991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13568 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:13632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:13656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:13664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:13144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:41.175 [2024-07-10 15:50:20.450336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:13208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:13264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:13696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:13704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:13712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:13720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450665] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:13744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.175 [2024-07-10 15:50:20.450827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.175 [2024-07-10 15:50:20.450844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.175 [2024-07-10 15:50:20.450859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.450877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:13768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.450892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.450909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.450925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.450941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.450957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.450973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:13792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.450988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451020] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:13808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:13816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.451085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:13832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.451149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:13840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.451180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.451282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:13872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.451315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:13312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:13896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:41.176 [2024-07-10 15:50:20.451725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:13912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:13920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:13936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.451936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.451969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.451986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.452001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.452034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452055] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.452071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.452103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:14000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.452136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.452168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:14016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.452200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.452232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:14032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.176 [2024-07-10 15:50:20.452265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.176 [2024-07-10 15:50:20.452282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.176 [2024-07-10 15:50:20.452297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.177 [2024-07-10 15:50:20.452362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452380] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.177 [2024-07-10 15:50:20.452396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:14088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:13360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:13384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:13408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:13440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:14104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.177 [2024-07-10 15:50:20.452854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:14112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.452972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:13488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.452987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:13496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13512 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:13536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:13584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.177 [2024-07-10 15:50:20.453379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ed3a0 is same with the state(5) to be set 00:26:41.177 [2024-07-10 15:50:20.453413] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:41.177 [2024-07-10 15:50:20.453636] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:41.177 [2024-07-10 15:50:20.453657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13648 len:8 PRP1 0x0 PRP2 0x0 00:26:41.177 [2024-07-10 15:50:20.453671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453756] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x5ed3a0 was disconnected and freed. reset controller. 00:26:41.177 [2024-07-10 15:50:20.453850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.177 [2024-07-10 15:50:20.453875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.177 [2024-07-10 15:50:20.453911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.177 [2024-07-10 15:50:20.453943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:41.177 [2024-07-10 15:50:20.453975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.177 [2024-07-10 15:50:20.453989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.177 [2024-07-10 15:50:20.456437] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.177 [2024-07-10 15:50:20.456503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.177 [2024-07-10 15:50:20.457038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-07-10 15:50:20.457363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.177 [2024-07-10 15:50:20.457417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.177 [2024-07-10 15:50:20.457445] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.177 [2024-07-10 15:50:20.457614] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.177 [2024-07-10 15:50:20.457739] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.177 [2024-07-10 15:50:20.457767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.457786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.178 [2024-07-10 15:50:20.460270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.178 [2024-07-10 15:50:20.469365] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.178 [2024-07-10 15:50:20.469731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.469900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.469929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.178 [2024-07-10 15:50:20.469947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.178 [2024-07-10 15:50:20.470168] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.178 [2024-07-10 15:50:20.470355] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.178 [2024-07-10 15:50:20.470379] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.470395] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.178 [2024-07-10 15:50:20.472694] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.178 [2024-07-10 15:50:20.481888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.178 [2024-07-10 15:50:20.482211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.482500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.482530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.178 [2024-07-10 15:50:20.482548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.178 [2024-07-10 15:50:20.482713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.178 [2024-07-10 15:50:20.482845] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.178 [2024-07-10 15:50:20.482869] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.482885] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.178 [2024-07-10 15:50:20.485283] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
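For readers skimming the dump above: each aborted completion is printed as (SCT/SC), so "(00/08)" is status code type 0x00 (generic command status) with status code 0x08, which the driver labels ABORTED - SQ DELETION, i.e. the queued I/O was discarded because its submission queue was deleted when the qpair went down. A minimal stand-alone decoder (a sketch following the NVMe completion-status bit layout, not SPDK source) that reproduces the (SCT/SC) p/m/dnr formatting seen above:

#include <stdint.h>
#include <stdio.h>

/* Sketch only (not SPDK code): decode the 16-bit status halfword of an NVMe
 * completion (CQE dword 3, bits 31:16).  Field layout per the NVMe base spec:
 *   bit 0      P   - phase tag
 *   bits 8:1   SC  - status code
 *   bits 11:9  SCT - status code type
 *   bit 14     M   - more
 *   bit 15     DNR - do not retry
 */
static void decode_nvme_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    /* Same shape as the log lines: "(SCT/SC) ... p:.. m:.. dnr:.." */
    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* ABORTED - SQ DELETION as printed above: SCT 0x00, SC 0x08, phase 0. */
    decode_nvme_status(0x08 << 1);   /* prints "(00/08) p:0 m:0 dnr:0" */
    return 0;
}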
00:26:41.178 [2024-07-10 15:50:20.494321] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.178 [2024-07-10 15:50:20.494735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.494974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.495024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.178 [2024-07-10 15:50:20.495042] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.178 [2024-07-10 15:50:20.495172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.178 [2024-07-10 15:50:20.495340] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.178 [2024-07-10 15:50:20.495364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.495385] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.178 [2024-07-10 15:50:20.497698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.178 [2024-07-10 15:50:20.506912] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.178 [2024-07-10 15:50:20.507399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.507637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.507664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.178 [2024-07-10 15:50:20.507680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.178 [2024-07-10 15:50:20.507795] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.178 [2024-07-10 15:50:20.507975] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.178 [2024-07-10 15:50:20.507999] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.508014] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.178 [2024-07-10 15:50:20.510255] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.178 [2024-07-10 15:50:20.519690] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.178 [2024-07-10 15:50:20.520063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.520346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.520372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.178 [2024-07-10 15:50:20.520402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.178 [2024-07-10 15:50:20.520613] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.178 [2024-07-10 15:50:20.520819] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.178 [2024-07-10 15:50:20.520844] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.520859] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.178 [2024-07-10 15:50:20.523190] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.178 [2024-07-10 15:50:20.532356] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.178 [2024-07-10 15:50:20.532714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.532951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.533022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.178 [2024-07-10 15:50:20.533040] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.178 [2024-07-10 15:50:20.533241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.178 [2024-07-10 15:50:20.533374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.178 [2024-07-10 15:50:20.533398] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.533413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.178 [2024-07-10 15:50:20.535827] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.178 [2024-07-10 15:50:20.544948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.178 [2024-07-10 15:50:20.545375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.545577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.178 [2024-07-10 15:50:20.545607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.178 [2024-07-10 15:50:20.545626] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.178 [2024-07-10 15:50:20.545773] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.178 [2024-07-10 15:50:20.545942] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.178 [2024-07-10 15:50:20.545967] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.178 [2024-07-10 15:50:20.545997] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.548466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.439 [2024-07-10 15:50:20.557492] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.557849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.558037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.558065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.558083] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.558268] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.558454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.558480] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.558496] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.560599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.439 [2024-07-10 15:50:20.570016] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.570372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.570527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.570557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.570575] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.570759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.570911] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.570934] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.570950] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.573247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.439 [2024-07-10 15:50:20.582620] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.582997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.583213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.583241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.583259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.583388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.583568] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.583593] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.583608] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.586082] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.439 [2024-07-10 15:50:20.595124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.595482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.595742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.595768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.595784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.595975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.596091] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.596115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.596130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.598399] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.439 [2024-07-10 15:50:20.607789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.608137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.608280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.608308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.608326] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.608541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.608711] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.608736] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.608751] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.611333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.439 [2024-07-10 15:50:20.620536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.620977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.621129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.621154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.621170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.621347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.621553] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.621578] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.621594] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.623799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.439 [2024-07-10 15:50:20.633465] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.633838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.634048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.634076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.634093] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.634276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.634474] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.634499] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.634515] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.636734] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.439 [2024-07-10 15:50:20.646131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.646465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.646672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.646700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.646718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.646884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.439 [2024-07-10 15:50:20.647052] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.439 [2024-07-10 15:50:20.647076] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.439 [2024-07-10 15:50:20.647092] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.439 [2024-07-10 15:50:20.649446] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.439 [2024-07-10 15:50:20.658792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.439 [2024-07-10 15:50:20.659168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.659364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.439 [2024-07-10 15:50:20.659389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.439 [2024-07-10 15:50:20.659410] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.439 [2024-07-10 15:50:20.659644] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.659851] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.659875] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.659891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.662094] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
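The block above repeats the same cycle roughly every 12-13 ms: disconnect the controller, try to reconnect the qpair to 10.0.0.2:4420, have posix_sock_create fail with errno 111 (ECONNREFUSED) because nothing is listening on the target side yet, and mark the reset attempt as failed. A stand-alone probe (a sketch, not SPDK code; it assumes the target host is otherwise reachable) that reproduces the same errno when the listener is absent:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical probe: attempt one TCP connect to the address/port from the
 * log.  With no NVMe/TCP listener on 10.0.0.2:4420, connect() fails right
 * away with errno 111 (ECONNREFUSED), matching the posix_sock_create errors
 * above. */
int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),              /* default NVMe/TCP port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    else
        printf("connected\n");

    close(fd);
    return 0;
}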
00:26:41.440 [2024-07-10 15:50:20.671602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.671989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.672170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.672198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.672216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.672399] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.672560] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.672584] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.672600] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.674983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.440 [2024-07-10 15:50:20.684220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.684681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.684901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.684929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.684947] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.685148] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.685299] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.685323] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.685338] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.687622] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.440 [2024-07-10 15:50:20.696933] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.697264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.697449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.697479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.697496] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.697649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.697783] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.697806] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.697822] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.700008] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.440 [2024-07-10 15:50:20.709436] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.709829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.709984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.710012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.710030] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.710176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.710309] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.710333] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.710349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.712815] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.440 [2024-07-10 15:50:20.721978] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.722374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.722593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.722622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.722640] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.722769] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.722956] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.722979] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.722995] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.725307] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.440 [2024-07-10 15:50:20.734577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.735009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.735196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.735224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.735241] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.735406] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.735608] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.735633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.735648] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.737850] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.440 [2024-07-10 15:50:20.747127] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.747538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.747723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.747752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.747769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.747917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.748050] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.748073] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.748088] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.750445] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.440 [2024-07-10 15:50:20.759646] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.760048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.760207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.760235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.760253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.760448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.760618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.760642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.760657] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.763058] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.440 [2024-07-10 15:50:20.772315] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.772693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.772879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.772908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.772926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.773055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.440 [2024-07-10 15:50:20.773225] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.440 [2024-07-10 15:50:20.773253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.440 [2024-07-10 15:50:20.773270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.440 [2024-07-10 15:50:20.775482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.440 [2024-07-10 15:50:20.784924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.440 [2024-07-10 15:50:20.785311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.785494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.440 [2024-07-10 15:50:20.785524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.440 [2024-07-10 15:50:20.785542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.440 [2024-07-10 15:50:20.785724] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.441 [2024-07-10 15:50:20.785947] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.441 [2024-07-10 15:50:20.785971] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.441 [2024-07-10 15:50:20.785987] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.441 [2024-07-10 15:50:20.788331] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.441 [2024-07-10 15:50:20.797552] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.441 [2024-07-10 15:50:20.797861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.441 [2024-07-10 15:50:20.798043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.441 [2024-07-10 15:50:20.798072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.441 [2024-07-10 15:50:20.798090] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.441 [2024-07-10 15:50:20.798273] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.441 [2024-07-10 15:50:20.798454] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.441 [2024-07-10 15:50:20.798478] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.441 [2024-07-10 15:50:20.798494] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.441 [2024-07-10 15:50:20.800733] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.441 [2024-07-10 15:50:20.809964] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.441 [2024-07-10 15:50:20.810387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.441 [2024-07-10 15:50:20.810625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.441 [2024-07-10 15:50:20.810672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.441 [2024-07-10 15:50:20.810692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.441 [2024-07-10 15:50:20.810841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.441 [2024-07-10 15:50:20.811030] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.441 [2024-07-10 15:50:20.811053] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.441 [2024-07-10 15:50:20.811075] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.701 [2024-07-10 15:50:20.813529] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.701 [2024-07-10 15:50:20.822691] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.701 [2024-07-10 15:50:20.823049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.701 [2024-07-10 15:50:20.823265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.701 [2024-07-10 15:50:20.823294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.701 [2024-07-10 15:50:20.823312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.701 [2024-07-10 15:50:20.823508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.701 [2024-07-10 15:50:20.823625] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.701 [2024-07-10 15:50:20.823649] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.701 [2024-07-10 15:50:20.823664] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.701 [2024-07-10 15:50:20.826050] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.701 [2024-07-10 15:50:20.835285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.701 [2024-07-10 15:50:20.835661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.701 [2024-07-10 15:50:20.835848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.701 [2024-07-10 15:50:20.835876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.701 [2024-07-10 15:50:20.835894] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.701 [2024-07-10 15:50:20.836023] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.701 [2024-07-10 15:50:20.836156] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.701 [2024-07-10 15:50:20.836179] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.701 [2024-07-10 15:50:20.836195] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.701 [2024-07-10 15:50:20.838610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.701 [2024-07-10 15:50:20.847842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.701 [2024-07-10 15:50:20.848168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.701 [2024-07-10 15:50:20.848353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.848382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.848400] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.848611] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.848745] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.848769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.848784] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.851029] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.702 [2024-07-10 15:50:20.860410] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.860811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.861018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.861046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.861064] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.861265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.861446] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.861470] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.861486] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.863905] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.702 [2024-07-10 15:50:20.873077] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.873418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.873608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.873637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.873654] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.873855] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.874024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.874047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.874063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.876158] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.702 [2024-07-10 15:50:20.885849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.886304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.886481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.886511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.886528] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.886712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.886918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.886942] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.886957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.889141] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.702 [2024-07-10 15:50:20.898279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.898695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.898939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.898967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.898985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.899132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.899319] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.899342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.899358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.901784] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.702 [2024-07-10 15:50:20.911033] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.911431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.911610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.911638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.911656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.911803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.911954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.911977] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.911993] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.914476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.702 [2024-07-10 15:50:20.923679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.924102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.924318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.924346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.924364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.924523] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.924704] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.924728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.924744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.926945] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.702 [2024-07-10 15:50:20.936325] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.936665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.936874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.936899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.936915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.937115] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.937249] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.937272] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.937287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.939662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.702 [2024-07-10 15:50:20.948991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.949374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.949548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.949577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.949595] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.949760] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.949894] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.949917] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.949932] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.952207] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.702 [2024-07-10 15:50:20.961612] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.962115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.962326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.702 [2024-07-10 15:50:20.962355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.702 [2024-07-10 15:50:20.962371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.702 [2024-07-10 15:50:20.962541] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.702 [2024-07-10 15:50:20.962741] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.702 [2024-07-10 15:50:20.962764] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.702 [2024-07-10 15:50:20.962780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.702 [2024-07-10 15:50:20.965105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.702 [2024-07-10 15:50:20.974058] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.702 [2024-07-10 15:50:20.974437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:20.974631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:20.974661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:20.974678] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:20.974857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:20.975080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:20.975104] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:20.975119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:20.977516] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.703 [2024-07-10 15:50:20.986538] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:20.986861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:20.987071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:20.987099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:20.987117] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:20.987300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:20.987467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:20.987492] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:20.987507] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:20.989853] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.703 [2024-07-10 15:50:20.998984] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:20.999368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:20.999565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:20.999592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:20.999608] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:20.999755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:20.999937] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:20.999960] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:20.999975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:21.002287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.703 [2024-07-10 15:50:21.011564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:21.011959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.012153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.012179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:21.012200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:21.012416] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:21.012579] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:21.012603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:21.012619] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:21.014877] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.703 [2024-07-10 15:50:21.024197] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:21.024611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.024791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.024819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:21.024837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:21.025038] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:21.025189] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:21.025213] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:21.025229] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:21.027290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.703 [2024-07-10 15:50:21.036869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:21.037235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.037414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.037449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:21.037466] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:21.037594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:21.037762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:21.037786] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:21.037801] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:21.040149] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.703 [2024-07-10 15:50:21.049471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:21.049843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.050026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.050054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:21.050072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:21.050243] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:21.050376] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:21.050400] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:21.050415] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:21.053024] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.703 [2024-07-10 15:50:21.061996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:21.062327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.062511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.062538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:21.062554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:21.062738] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:21.062871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:21.062894] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:21.062909] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.703 [2024-07-10 15:50:21.065168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.703 [2024-07-10 15:50:21.074583] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.703 [2024-07-10 15:50:21.074970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.075133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.703 [2024-07-10 15:50:21.075160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.703 [2024-07-10 15:50:21.075177] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.703 [2024-07-10 15:50:21.075331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.703 [2024-07-10 15:50:21.075514] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.703 [2024-07-10 15:50:21.075538] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.703 [2024-07-10 15:50:21.075554] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.962 [2024-07-10 15:50:21.077736] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.962 [2024-07-10 15:50:21.087131] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.962 [2024-07-10 15:50:21.087477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.962 [2024-07-10 15:50:21.087623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.962 [2024-07-10 15:50:21.087652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.962 [2024-07-10 15:50:21.087669] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.962 [2024-07-10 15:50:21.087853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.962 [2024-07-10 15:50:21.088010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.962 [2024-07-10 15:50:21.088034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.962 [2024-07-10 15:50:21.088050] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.962 [2024-07-10 15:50:21.090498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.962 [2024-07-10 15:50:21.099544] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.962 [2024-07-10 15:50:21.099869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.962 [2024-07-10 15:50:21.100058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.962 [2024-07-10 15:50:21.100084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.962 [2024-07-10 15:50:21.100101] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.962 [2024-07-10 15:50:21.100248] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.100442] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.100467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.100482] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.102886] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.963 [2024-07-10 15:50:21.112036] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.112489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.112648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.112675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.112691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.112925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.113094] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.113117] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.113133] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.115392] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.963 [2024-07-10 15:50:21.124641] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.125026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.125206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.125234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.125252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.125400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.125597] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.125629] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.125646] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.127994] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.963 [2024-07-10 15:50:21.137166] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.137558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.137820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.137845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.137860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.138057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.138217] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.138241] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.138257] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.140577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.963 [2024-07-10 15:50:21.149740] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.150155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.150370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.150399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.150416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.150610] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.150743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.150767] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.150782] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.152985] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.963 [2024-07-10 15:50:21.162116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.162474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.162689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.162715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.162731] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.162949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.163118] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.163141] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.163162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.165421] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.963 [2024-07-10 15:50:21.174548] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.174931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.175088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.175117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.175134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.175300] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.175443] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.175468] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.175483] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.177737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.963 [2024-07-10 15:50:21.187050] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.187442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.187660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.187686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.187716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.187872] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.187993] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.188017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.188032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.190501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.963 [2024-07-10 15:50:21.199643] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.200048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.200233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.200261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.200279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.200436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.200634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.200657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.200672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.203091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.963 [2024-07-10 15:50:21.212112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.212495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.212675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.212704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.212721] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.212922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.213056] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.963 [2024-07-10 15:50:21.213079] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.963 [2024-07-10 15:50:21.213095] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.963 [2024-07-10 15:50:21.215394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.963 [2024-07-10 15:50:21.224623] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.963 [2024-07-10 15:50:21.225036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.225252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.963 [2024-07-10 15:50:21.225281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.963 [2024-07-10 15:50:21.225298] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.963 [2024-07-10 15:50:21.225476] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.963 [2024-07-10 15:50:21.225663] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.225687] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.225702] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.227936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.964 [2024-07-10 15:50:21.237027] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.237393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.237611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.237640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.237657] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.237876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.238045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.238069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.238084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.240380] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.964 [2024-07-10 15:50:21.249679] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.250114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.250298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.250327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.250344] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.250519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.250708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.250732] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.250747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.252970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.964 [2024-07-10 15:50:21.262198] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.262549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.262729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.262759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.262777] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.262906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.263058] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.263081] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.263097] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.265535] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.964 [2024-07-10 15:50:21.274748] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.275084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.275260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.275302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.275321] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.275560] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.275681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.275718] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.275735] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.277867] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.964 [2024-07-10 15:50:21.287396] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.287739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.287931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.287960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.287978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.288180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.288276] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.288299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.288314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.290737] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.964 [2024-07-10 15:50:21.300120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.300489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.300632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.300658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.300674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.300883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.301071] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.301095] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.301110] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.303324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:41.964 [2024-07-10 15:50:21.312773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.313105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.313264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.313294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.313312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.313501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.313670] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.313692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.313706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.316117] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:41.964 [2024-07-10 15:50:21.325515] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:41.964 [2024-07-10 15:50:21.325816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.325993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:41.964 [2024-07-10 15:50:21.326021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:41.964 [2024-07-10 15:50:21.326044] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:41.964 [2024-07-10 15:50:21.326228] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:41.964 [2024-07-10 15:50:21.326416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:41.964 [2024-07-10 15:50:21.326450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:41.964 [2024-07-10 15:50:21.326467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:41.964 [2024-07-10 15:50:21.328723] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.224 [2024-07-10 15:50:21.338306] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.338711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.338865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.338895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.338913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.339043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.339213] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.339236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.339252] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.341494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.224 [2024-07-10 15:50:21.350789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.351127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.351310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.351339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.351357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.351550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.351721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.351745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.351760] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.354105] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.224 [2024-07-10 15:50:21.363242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.363625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.363961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.363990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.364008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.364178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.364384] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.364407] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.364423] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.366760] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.224 [2024-07-10 15:50:21.376113] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.376468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.376686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.376715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.376733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.376898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.377067] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.377090] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.377106] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.379554] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.224 [2024-07-10 15:50:21.388843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.389201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.389381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.389410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.389437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.389623] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.389810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.389833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.389849] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.392373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.224 [2024-07-10 15:50:21.401354] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.401717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.401890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.401918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.401936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.402120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.402294] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.402318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.402334] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.404474] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.224 [2024-07-10 15:50:21.413917] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.414236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.414381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.414409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.414435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.414567] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.414737] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.414760] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.414776] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.416940] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.224 [2024-07-10 15:50:21.426507] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.426902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.427113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.427140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.427158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.427359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.224 [2024-07-10 15:50:21.427573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.224 [2024-07-10 15:50:21.427597] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.224 [2024-07-10 15:50:21.427613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.224 [2024-07-10 15:50:21.429958] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.224 [2024-07-10 15:50:21.439080] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.224 [2024-07-10 15:50:21.439493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.439701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.224 [2024-07-10 15:50:21.439750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.224 [2024-07-10 15:50:21.439768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.224 [2024-07-10 15:50:21.439951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.440102] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.440132] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.440148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.442684] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.225 [2024-07-10 15:50:21.451678] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.452056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.452263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.452291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.452309] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.452448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.452564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.452587] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.452603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.454787] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.225 [2024-07-10 15:50:21.464214] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.464550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.464735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.464783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.464802] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.464985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.465153] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.465177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.465193] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.467585] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.225 [2024-07-10 15:50:21.476884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.477297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.477505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.477531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.477546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.477727] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.477890] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.477913] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.477934] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.480143] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.225 [2024-07-10 15:50:21.489603] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.490030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.490238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.490263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.490279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.490449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.490593] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.490618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.490634] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.492889] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.225 [2024-07-10 15:50:21.502132] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.502450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.502636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.502666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.502684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.502832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.503001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.503025] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.503041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.505353] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.225 [2024-07-10 15:50:21.514637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.515020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.515163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.515192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.515209] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.515374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.515587] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.515611] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.515627] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.517851] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.225 [2024-07-10 15:50:21.527059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.527398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.527566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.527597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.527615] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.527780] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.527950] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.527974] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.527989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.530270] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.225 [2024-07-10 15:50:21.539713] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.540040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.540307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.540355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.540373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.540558] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.540746] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.540770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.540786] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.543347] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.225 [2024-07-10 15:50:21.552195] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.552547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.552738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.552766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.552784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.552967] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.225 [2024-07-10 15:50:21.553100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.225 [2024-07-10 15:50:21.553123] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.225 [2024-07-10 15:50:21.553138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.225 [2024-07-10 15:50:21.555419] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.225 [2024-07-10 15:50:21.564815] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.225 [2024-07-10 15:50:21.565225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.565380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.225 [2024-07-10 15:50:21.565406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.225 [2024-07-10 15:50:21.565440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.225 [2024-07-10 15:50:21.565573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.226 [2024-07-10 15:50:21.565735] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.226 [2024-07-10 15:50:21.565759] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.226 [2024-07-10 15:50:21.565775] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.226 [2024-07-10 15:50:21.568013] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.226 [2024-07-10 15:50:21.577368] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.226 [2024-07-10 15:50:21.577728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.226 [2024-07-10 15:50:21.577915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.226 [2024-07-10 15:50:21.577948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.226 [2024-07-10 15:50:21.577984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.226 [2024-07-10 15:50:21.578167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.226 [2024-07-10 15:50:21.578336] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.226 [2024-07-10 15:50:21.578360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.226 [2024-07-10 15:50:21.578376] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.226 [2024-07-10 15:50:21.580682] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.226 [2024-07-10 15:50:21.589887] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.226 [2024-07-10 15:50:21.590281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.226 [2024-07-10 15:50:21.590483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.226 [2024-07-10 15:50:21.590512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.226 [2024-07-10 15:50:21.590530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.226 [2024-07-10 15:50:21.590659] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.226 [2024-07-10 15:50:21.590810] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.226 [2024-07-10 15:50:21.590833] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.226 [2024-07-10 15:50:21.590848] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.226 [2024-07-10 15:50:21.593208] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.485 [2024-07-10 15:50:21.602419] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.485 [2024-07-10 15:50:21.602825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.603001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.603044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.485 [2024-07-10 15:50:21.603062] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.485 [2024-07-10 15:50:21.603184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.485 [2024-07-10 15:50:21.603388] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.485 [2024-07-10 15:50:21.603414] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.485 [2024-07-10 15:50:21.603441] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.485 [2024-07-10 15:50:21.605965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.485 [2024-07-10 15:50:21.615151] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.485 [2024-07-10 15:50:21.615473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.615634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.615662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.485 [2024-07-10 15:50:21.615680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.485 [2024-07-10 15:50:21.615845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.485 [2024-07-10 15:50:21.616014] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.485 [2024-07-10 15:50:21.616038] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.485 [2024-07-10 15:50:21.616054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.485 [2024-07-10 15:50:21.618329] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.485 [2024-07-10 15:50:21.627807] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.485 [2024-07-10 15:50:21.628165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.628368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.628393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.485 [2024-07-10 15:50:21.628409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.485 [2024-07-10 15:50:21.628607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.485 [2024-07-10 15:50:21.628806] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.485 [2024-07-10 15:50:21.628830] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.485 [2024-07-10 15:50:21.628846] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.485 [2024-07-10 15:50:21.631210] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.485 [2024-07-10 15:50:21.640279] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.485 [2024-07-10 15:50:21.640695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.640859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.485 [2024-07-10 15:50:21.640893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.485 [2024-07-10 15:50:21.640911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.641094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.641263] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.641286] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.641301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.643658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.486 [2024-07-10 15:50:21.652788] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.653190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.653364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.653389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.653405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.653548] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.653748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.653773] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.653788] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.656061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.486 [2024-07-10 15:50:21.665381] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.665780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.665977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.666025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.666043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.666172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.666341] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.666364] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.666380] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.668609] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.486 [2024-07-10 15:50:21.677956] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.678403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.678641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.678688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.678712] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.678895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.679046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.679069] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.679085] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.681482] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.486 [2024-07-10 15:50:21.690536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.690893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.691039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.691066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.691100] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.691229] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.691416] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.691450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.691467] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.693705] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.486 [2024-07-10 15:50:21.703015] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.703453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.703610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.703638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.703656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.703803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.703990] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.704013] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.704029] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.706412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.486 [2024-07-10 15:50:21.715702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.716060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.716194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.716220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.716236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.716440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.716599] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.716619] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.716632] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.719038] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.486 [2024-07-10 15:50:21.728399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.728745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.728890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.728916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.728948] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.729131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.729318] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.729342] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.729358] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.731658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.486 [2024-07-10 15:50:21.741014] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.741354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.741530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.741561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.741578] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.741726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.741912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.486 [2024-07-10 15:50:21.741936] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.486 [2024-07-10 15:50:21.741952] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.486 [2024-07-10 15:50:21.744374] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.486 [2024-07-10 15:50:21.753700] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.486 [2024-07-10 15:50:21.754048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.754237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.486 [2024-07-10 15:50:21.754265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.486 [2024-07-10 15:50:21.754281] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.486 [2024-07-10 15:50:21.754503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.486 [2024-07-10 15:50:21.754679] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.754703] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.754719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.756957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.487 [2024-07-10 15:50:21.766176] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.766499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.766671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.766714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.766732] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.766898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.767085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.767109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.767124] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.769463] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.487 [2024-07-10 15:50:21.778804] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.779223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.779399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.779441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.779462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.779663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.779850] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.779874] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.779890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.782074] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.487 [2024-07-10 15:50:21.791532] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.791876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.792208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.792260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.792278] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.792473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.792607] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.792636] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.792653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.794946] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.487 [2024-07-10 15:50:21.804043] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.804348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.804568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.804595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.804625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.804768] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.804938] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.804962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.804977] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.807236] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.487 [2024-07-10 15:50:21.816680] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.817073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.817298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.817356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.817374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.817515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.817703] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.817727] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.817742] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.820193] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.487 [2024-07-10 15:50:21.829412] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.829765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.829972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.830000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.830018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.830165] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.830316] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.830339] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.830361] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.832608] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.487 [2024-07-10 15:50:21.842073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.842449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.842624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.842658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.842692] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.842893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.843080] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.843103] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.843119] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.845341] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.487 [2024-07-10 15:50:21.854789] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.487 [2024-07-10 15:50:21.855137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.855294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.487 [2024-07-10 15:50:21.855322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.487 [2024-07-10 15:50:21.855340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.487 [2024-07-10 15:50:21.855534] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.487 [2024-07-10 15:50:21.855723] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.487 [2024-07-10 15:50:21.855747] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.487 [2024-07-10 15:50:21.855762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.487 [2024-07-10 15:50:21.858228] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.746 [2024-07-10 15:50:21.867292] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.746 [2024-07-10 15:50:21.867745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.746 [2024-07-10 15:50:21.868002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.746 [2024-07-10 15:50:21.868057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.746 [2024-07-10 15:50:21.868075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.746 [2024-07-10 15:50:21.868240] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.746 [2024-07-10 15:50:21.868436] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.746 [2024-07-10 15:50:21.868460] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.746 [2024-07-10 15:50:21.868476] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.870876] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.747 [2024-07-10 15:50:21.879958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.880351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.880534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.880563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.880581] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.880711] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.880897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.880920] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.880936] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.883487] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.747 [2024-07-10 15:50:21.892494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.892986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.893337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.893387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.893404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.893580] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.893768] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.893792] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.893808] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.896045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.747 [2024-07-10 15:50:21.905026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.905456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.905624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.905649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.905665] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.905859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.906084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.906107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.906123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.908362] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.747 [2024-07-10 15:50:21.917715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.918107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.918288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.918316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.918334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.918511] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.918681] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.918705] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.918720] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.921136] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.747 [2024-07-10 15:50:21.930181] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.930468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.930796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.930843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.930861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.930990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.931159] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.931182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.931198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.933699] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.747 [2024-07-10 15:50:21.942607] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.942950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.943234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.943286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.943304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.943499] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.943595] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.943618] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.943633] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.945957] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.747 [2024-07-10 15:50:21.955172] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.955520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.955698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.955742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.955759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.955961] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.956112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.956135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.956151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.958411] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.747 [2024-07-10 15:50:21.967715] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.968040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.968215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.968240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.968256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.968451] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.968603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.968627] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.968642] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.970719] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.747 [2024-07-10 15:50:21.980384] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.980789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.981049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.981088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.981104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.981256] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.981374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.981397] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.981413] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.983763] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.747 [2024-07-10 15:50:21.992842] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.747 [2024-07-10 15:50:21.993350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.993565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.747 [2024-07-10 15:50:21.993594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.747 [2024-07-10 15:50:21.993617] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.747 [2024-07-10 15:50:21.993783] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.747 [2024-07-10 15:50:21.993952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.747 [2024-07-10 15:50:21.993975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.747 [2024-07-10 15:50:21.993991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.747 [2024-07-10 15:50:21.996215] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.747 [2024-07-10 15:50:22.005288] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.005636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.005826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.005852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.005868] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.006065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.006209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.006232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.006248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.008549] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.748 [2024-07-10 15:50:22.017843] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.018166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.018322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.018363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.018379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.018552] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.018669] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.018692] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.018708] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.021091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.748 [2024-07-10 15:50:22.030564] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.030871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.031025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.031054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.031072] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.031244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.031413] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.031449] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.031466] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.033776] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.748 [2024-07-10 15:50:22.043147] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.043507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.043714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.043742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.043760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.043925] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.044112] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.044135] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.044151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.046626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.748 [2024-07-10 15:50:22.055579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.056090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.056266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.056294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.056312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.056487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.056657] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.056681] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.056696] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.059096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.748 [2024-07-10 15:50:22.068300] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.068607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.068788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.068816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.068834] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.068964] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.069083] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.069107] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.069123] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.071328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.748 [2024-07-10 15:50:22.080911] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.081228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.081404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.081442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.081463] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.081647] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.081834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.081857] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.081873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.084092] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.748 [2024-07-10 15:50:22.093521] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.093843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.094143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.094203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.094220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.094385] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.094600] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.094625] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.094640] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.096969] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:42.748 [2024-07-10 15:50:22.106112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.106489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.106696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.106724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.106742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.106853] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.107022] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.107050] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.107066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:42.748 [2024-07-10 15:50:22.109381] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:42.748 [2024-07-10 15:50:22.118966] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:42.748 [2024-07-10 15:50:22.119334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.119518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:42.748 [2024-07-10 15:50:22.119553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:42.748 [2024-07-10 15:50:22.119583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:42.748 [2024-07-10 15:50:22.119751] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:42.748 [2024-07-10 15:50:22.119916] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:42.748 [2024-07-10 15:50:22.119941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:42.748 [2024-07-10 15:50:22.119956] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.007 [2024-07-10 15:50:22.122449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.007 [2024-07-10 15:50:22.131415] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.007 [2024-07-10 15:50:22.131955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-10 15:50:22.132180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-10 15:50:22.132206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.007 [2024-07-10 15:50:22.132221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.007 [2024-07-10 15:50:22.132408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.007 [2024-07-10 15:50:22.132609] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.007 [2024-07-10 15:50:22.132633] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.007 [2024-07-10 15:50:22.132649] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.007 [2024-07-10 15:50:22.134854] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.007 [2024-07-10 15:50:22.143939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.007 [2024-07-10 15:50:22.144383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-10 15:50:22.144597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.007 [2024-07-10 15:50:22.144626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.007 [2024-07-10 15:50:22.144643] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.007 [2024-07-10 15:50:22.144845] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.007 [2024-07-10 15:50:22.144996] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.007 [2024-07-10 15:50:22.145020] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.007 [2024-07-10 15:50:22.145041] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.007 [2024-07-10 15:50:22.147486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.007 [2024-07-10 15:50:22.156504] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.007 [2024-07-10 15:50:22.156903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.157057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.157085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.157103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.157232] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.157419] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.157454] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.157471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.159530] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.008 [2024-07-10 15:50:22.169116] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.169446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.169630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.169658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.169676] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.169841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.170028] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.170051] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.170067] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.172288] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.008 [2024-07-10 15:50:22.181773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.182126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.182295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.182321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.182337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.182551] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.182721] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.182745] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.182761] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.185053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.008 [2024-07-10 15:50:22.194296] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.194663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.194811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.194837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.194853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.195044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.195180] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.195204] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.195219] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.197484] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.008 [2024-07-10 15:50:22.206903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.207336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.207539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.207566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.207582] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.207744] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.207927] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.207950] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.207966] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.210333] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.008 [2024-07-10 15:50:22.219388] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.219725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.219981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.220032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.220049] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.220196] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.220347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.220370] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.220386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.222832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.008 [2024-07-10 15:50:22.232136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.232510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.232647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.232675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.232691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.232871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.233023] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.233047] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.233063] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.235321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.008 [2024-07-10 15:50:22.244753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.245136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.245325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.245352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.245369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.245576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.245765] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.245788] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.245804] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.248170] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.008 [2024-07-10 15:50:22.256981] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.257354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.257535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.257565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.257583] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.257748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.257952] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.257976] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.257992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.260412] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.008 [2024-07-10 15:50:22.269478] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.008 [2024-07-10 15:50:22.269940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.270283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.008 [2024-07-10 15:50:22.270334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.008 [2024-07-10 15:50:22.270352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.008 [2024-07-10 15:50:22.270493] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.008 [2024-07-10 15:50:22.270699] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.008 [2024-07-10 15:50:22.270723] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.008 [2024-07-10 15:50:22.270738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.008 [2024-07-10 15:50:22.272942] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.008 [2024-07-10 15:50:22.282020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.282409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.282549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.282575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.282591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.282771] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.282935] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.282958] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.282974] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.285377] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.009 [2024-07-10 15:50:22.294511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.294884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.295075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.295104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.295123] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.295288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.295488] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.295513] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.295529] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.297826] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.009 [2024-07-10 15:50:22.307124] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.307464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.307690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.307725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.307744] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.307963] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.308114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.308138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.308153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.310633] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.009 [2024-07-10 15:50:22.319516] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.320011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.320394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.320478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.320497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.320680] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.320832] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.320855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.320871] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.323075] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.009 [2024-07-10 15:50:22.332049] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.332368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.332550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.332580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.332598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.332745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.332915] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.332939] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.332954] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.335436] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.009 [2024-07-10 15:50:22.344658] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.345080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.345289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.345317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.345340] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.345516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.345705] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.345728] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.345744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.348289] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.009 [2024-07-10 15:50:22.357285] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.357668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.357912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.357938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.357953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.358094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.358253] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.358277] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.358293] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.360698] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.009 [2024-07-10 15:50:22.369963] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.009 [2024-07-10 15:50:22.370377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.370548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.009 [2024-07-10 15:50:22.370575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.009 [2024-07-10 15:50:22.370592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.009 [2024-07-10 15:50:22.370755] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.009 [2024-07-10 15:50:22.370944] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.009 [2024-07-10 15:50:22.370968] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.009 [2024-07-10 15:50:22.370983] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.009 [2024-07-10 15:50:22.373118] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.269 [2024-07-10 15:50:22.382627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.269 [2024-07-10 15:50:22.383036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.383221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.383251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.269 [2024-07-10 15:50:22.383269] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.269 [2024-07-10 15:50:22.383450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.269 [2024-07-10 15:50:22.383603] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.269 [2024-07-10 15:50:22.383628] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.269 [2024-07-10 15:50:22.383643] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.269 [2024-07-10 15:50:22.386011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.269 [2024-07-10 15:50:22.395262] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.269 [2024-07-10 15:50:22.395663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.395827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.395856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.269 [2024-07-10 15:50:22.395874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.269 [2024-07-10 15:50:22.395985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.269 [2024-07-10 15:50:22.396154] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.269 [2024-07-10 15:50:22.396177] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.269 [2024-07-10 15:50:22.396192] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.269 [2024-07-10 15:50:22.398717] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.269 [2024-07-10 15:50:22.407764] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.269 [2024-07-10 15:50:22.408097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.408312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.408337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.269 [2024-07-10 15:50:22.408353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.269 [2024-07-10 15:50:22.408508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.269 [2024-07-10 15:50:22.408719] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.269 [2024-07-10 15:50:22.408743] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.269 [2024-07-10 15:50:22.408759] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.269 [2024-07-10 15:50:22.411162] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.269 [2024-07-10 15:50:22.420421] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.269 [2024-07-10 15:50:22.420815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.421058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.421086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.269 [2024-07-10 15:50:22.421104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.269 [2024-07-10 15:50:22.421269] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.269 [2024-07-10 15:50:22.421491] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.269 [2024-07-10 15:50:22.421516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.269 [2024-07-10 15:50:22.421532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.269 [2024-07-10 15:50:22.423961] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.269 [2024-07-10 15:50:22.433157] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.269 [2024-07-10 15:50:22.433467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.433672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.433701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.269 [2024-07-10 15:50:22.433718] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.269 [2024-07-10 15:50:22.433884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.269 [2024-07-10 15:50:22.434106] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.269 [2024-07-10 15:50:22.434130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.269 [2024-07-10 15:50:22.434145] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.269 [2024-07-10 15:50:22.436774] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.269 [2024-07-10 15:50:22.445867] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.269 [2024-07-10 15:50:22.446398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.446585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.446613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.269 [2024-07-10 15:50:22.446631] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.269 [2024-07-10 15:50:22.446797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.269 [2024-07-10 15:50:22.446965] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.269 [2024-07-10 15:50:22.446989] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.269 [2024-07-10 15:50:22.447004] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.269 [2024-07-10 15:50:22.449339] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.269 [2024-07-10 15:50:22.458582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.269 [2024-07-10 15:50:22.459084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.459479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.269 [2024-07-10 15:50:22.459508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.269 [2024-07-10 15:50:22.459526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.459691] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.459842] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.459871] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.459887] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.462184] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.270 [2024-07-10 15:50:22.471110] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.471507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.471669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.471699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.471717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.471828] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.472016] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.472039] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.472054] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.474250] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.270 [2024-07-10 15:50:22.484020] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.484319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.484539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.484573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.484591] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.484720] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.484835] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.484858] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.484873] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.487191] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.270 [2024-07-10 15:50:22.496716] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.497190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.497385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.497410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.497432] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.497586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.497773] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.497797] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.497817] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.500022] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.270 [2024-07-10 15:50:22.509220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.509615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.509819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.509871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.509889] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.510072] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.510223] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.510246] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.510261] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.512655] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.270 [2024-07-10 15:50:22.521689] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.522073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.522232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.522260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.522277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.522489] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.522659] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.522683] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.522699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.525061] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.270 [2024-07-10 15:50:22.534193] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.534526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.534697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.534726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.534743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.534891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.535114] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.535138] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.535153] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.537462] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.270 [2024-07-10 15:50:22.546702] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.547045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.547261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.547289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.547307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.547487] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.547675] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.547699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.547714] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.550077] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.270 [2024-07-10 15:50:22.559362] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.559724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.559859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.559886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.559902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.560018] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.560143] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.560167] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.560183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.562370] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.270 [2024-07-10 15:50:22.572096] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.572510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.572701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.572742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.270 [2024-07-10 15:50:22.572758] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.270 [2024-07-10 15:50:22.572971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.270 [2024-07-10 15:50:22.573158] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.270 [2024-07-10 15:50:22.573182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.270 [2024-07-10 15:50:22.573198] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.270 [2024-07-10 15:50:22.575465] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.270 [2024-07-10 15:50:22.584659] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.270 [2024-07-10 15:50:22.584998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.585224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.270 [2024-07-10 15:50:22.585253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.271 [2024-07-10 15:50:22.585270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.271 [2024-07-10 15:50:22.585418] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.271 [2024-07-10 15:50:22.585617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.271 [2024-07-10 15:50:22.585641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.271 [2024-07-10 15:50:22.585656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.271 [2024-07-10 15:50:22.588004] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.271 [2024-07-10 15:50:22.597311] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.271 [2024-07-10 15:50:22.597674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.597847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.597872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.271 [2024-07-10 15:50:22.597888] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.271 [2024-07-10 15:50:22.598064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.271 [2024-07-10 15:50:22.598230] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.271 [2024-07-10 15:50:22.598254] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.271 [2024-07-10 15:50:22.598270] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.271 [2024-07-10 15:50:22.600626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.271 [2024-07-10 15:50:22.609707] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.271 [2024-07-10 15:50:22.610160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.610360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.610388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.271 [2024-07-10 15:50:22.610405] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.271 [2024-07-10 15:50:22.610598] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.271 [2024-07-10 15:50:22.610769] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.271 [2024-07-10 15:50:22.610793] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.271 [2024-07-10 15:50:22.610809] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.271 [2024-07-10 15:50:22.613123] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.271 [2024-07-10 15:50:22.622400] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.271 [2024-07-10 15:50:22.622852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.623060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.623086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.271 [2024-07-10 15:50:22.623103] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.271 [2024-07-10 15:50:22.623283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.271 [2024-07-10 15:50:22.623444] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.271 [2024-07-10 15:50:22.623469] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.271 [2024-07-10 15:50:22.623485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.271 [2024-07-10 15:50:22.625836] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.271 [2024-07-10 15:50:22.635000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.271 [2024-07-10 15:50:22.635361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.635542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.271 [2024-07-10 15:50:22.635572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.271 [2024-07-10 15:50:22.635590] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.271 [2024-07-10 15:50:22.635791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.271 [2024-07-10 15:50:22.635997] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.271 [2024-07-10 15:50:22.636021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.271 [2024-07-10 15:50:22.636036] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.271 [2024-07-10 15:50:22.638238] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.530 [2024-07-10 15:50:22.647485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.530 [2024-07-10 15:50:22.647946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.530 [2024-07-10 15:50:22.648115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.530 [2024-07-10 15:50:22.648141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.530 [2024-07-10 15:50:22.648157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.530 [2024-07-10 15:50:22.648343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.530 [2024-07-10 15:50:22.648565] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.530 [2024-07-10 15:50:22.648591] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.530 [2024-07-10 15:50:22.648607] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.530 [2024-07-10 15:50:22.650869] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.530 [2024-07-10 15:50:22.660004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.530 [2024-07-10 15:50:22.660451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.530 [2024-07-10 15:50:22.660689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.530 [2024-07-10 15:50:22.660715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.530 [2024-07-10 15:50:22.660736] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.530 [2024-07-10 15:50:22.660884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.530 [2024-07-10 15:50:22.661068] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.530 [2024-07-10 15:50:22.661092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.530 [2024-07-10 15:50:22.661108] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.530 [2024-07-10 15:50:22.663386] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.530 [2024-07-10 15:50:22.672522] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.530 [2024-07-10 15:50:22.672893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.530 [2024-07-10 15:50:22.673129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.530 [2024-07-10 15:50:22.673158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.530 [2024-07-10 15:50:22.673176] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.530 [2024-07-10 15:50:22.673359] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.530 [2024-07-10 15:50:22.673522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.530 [2024-07-10 15:50:22.673546] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.673562] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.675673] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.531 [2024-07-10 15:50:22.685202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.685612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.685833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.685860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.685876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.686065] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.686209] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.686234] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.686251] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.688542] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.531 [2024-07-10 15:50:22.697667] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.698015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.698216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.698242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.698259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.698475] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.698645] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.698671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.698687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.701030] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.531 [2024-07-10 15:50:22.710119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.710441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.710659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.710685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.710701] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.710889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.711085] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.711109] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.711126] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.713589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.531 [2024-07-10 15:50:22.722726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.723192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.723367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.723395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.723413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.723532] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.723684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.723708] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.723724] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.726017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.531 [2024-07-10 15:50:22.735190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.735534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.735717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.735748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.735766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.735914] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.736125] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.736148] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.736164] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.738528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.531 [2024-07-10 15:50:22.748208] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.748564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.748731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.748757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.748774] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.749008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.749191] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.749215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.749231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.751551] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.531 [2024-07-10 15:50:22.760873] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.761222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.761398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.761434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.761454] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.761638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.761753] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.761776] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.761791] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.764125] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.531 [2024-07-10 15:50:22.773536] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.773975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.774121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.774149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.774167] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.774314] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.774475] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.774504] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.774520] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.776938] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.531 [2024-07-10 15:50:22.786084] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.786461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.786648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.786677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.786694] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.786895] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.787064] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.787087] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.531 [2024-07-10 15:50:22.787103] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.531 [2024-07-10 15:50:22.789695] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.531 [2024-07-10 15:50:22.798595] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.531 [2024-07-10 15:50:22.799067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.799220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.531 [2024-07-10 15:50:22.799249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.531 [2024-07-10 15:50:22.799266] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.531 [2024-07-10 15:50:22.799395] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.531 [2024-07-10 15:50:22.799573] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.531 [2024-07-10 15:50:22.799598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.799613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.801832] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.532 [2024-07-10 15:50:22.811154] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.811536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.811801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.811852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.811870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.812054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.812205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.812229] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.812250] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.814620] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.532 [2024-07-10 15:50:22.823918] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.824304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.824521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.824548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.824565] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.824713] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.824912] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.824935] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.824951] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.827393] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.532 [2024-07-10 15:50:22.836414] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.836743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.836893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.836922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.836939] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.837086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.837273] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.837297] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.837312] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.839408] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.532 [2024-07-10 15:50:22.848996] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.849354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.849559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.849615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.849634] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.849781] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.849968] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.849992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.850007] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.852251] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.532 [2024-07-10 15:50:22.861577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.861963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.862170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.862198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.862216] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.862417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.862617] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.862641] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.862656] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.864912] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.532 [2024-07-10 15:50:22.874191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.874575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.874813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.874869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.874887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.875071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.875166] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.875189] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.875204] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.877616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.532 [2024-07-10 15:50:22.886614] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.886975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.887154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.887182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.887200] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.887329] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.887509] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.887534] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.887549] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.889788] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.532 [2024-07-10 15:50:22.898971] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.532 [2024-07-10 15:50:22.899329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.899530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.532 [2024-07-10 15:50:22.899557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.532 [2024-07-10 15:50:22.899573] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.532 [2024-07-10 15:50:22.899737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.532 [2024-07-10 15:50:22.899905] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.532 [2024-07-10 15:50:22.899928] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.532 [2024-07-10 15:50:22.899944] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.532 [2024-07-10 15:50:22.902328] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.790 [2024-07-10 15:50:22.911624] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.790 [2024-07-10 15:50:22.912026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.912236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.912264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.790 [2024-07-10 15:50:22.912282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.790 [2024-07-10 15:50:22.912440] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.790 [2024-07-10 15:50:22.912629] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.790 [2024-07-10 15:50:22.912653] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.790 [2024-07-10 15:50:22.912669] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.790 [2024-07-10 15:50:22.915106] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.790 [2024-07-10 15:50:22.924287] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.790 [2024-07-10 15:50:22.924602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.924800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.924827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.790 [2024-07-10 15:50:22.924843] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.790 [2024-07-10 15:50:22.925034] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.790 [2024-07-10 15:50:22.925186] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.790 [2024-07-10 15:50:22.925210] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.790 [2024-07-10 15:50:22.925226] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.790 [2024-07-10 15:50:22.927510] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.790 [2024-07-10 15:50:22.936660] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.790 [2024-07-10 15:50:22.937044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.937254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.937279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.790 [2024-07-10 15:50:22.937295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.790 [2024-07-10 15:50:22.937468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.790 [2024-07-10 15:50:22.937656] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.790 [2024-07-10 15:50:22.937680] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.790 [2024-07-10 15:50:22.937695] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.790 [2024-07-10 15:50:22.940258] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.790 [2024-07-10 15:50:22.949182] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.790 [2024-07-10 15:50:22.949594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.949794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.949856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.790 [2024-07-10 15:50:22.949874] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.790 [2024-07-10 15:50:22.950021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.790 [2024-07-10 15:50:22.950208] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.790 [2024-07-10 15:50:22.950232] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.790 [2024-07-10 15:50:22.950247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.790 [2024-07-10 15:50:22.952693] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.790 [2024-07-10 15:50:22.961738] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.790 [2024-07-10 15:50:22.962146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.962323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.790 [2024-07-10 15:50:22.962363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.790 [2024-07-10 15:50:22.962379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:22.962555] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:22.962708] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:22.962731] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:22.962747] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:22.964965] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.791 [2024-07-10 15:50:22.974373] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:22.974734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:22.974911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:22.974944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:22.974963] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:22.975093] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:22.975226] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:22.975249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:22.975264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:22.977489] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.791 [2024-07-10 15:50:22.986832] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:22.987273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:22.987467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:22.987493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:22.987509] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:22.987701] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:22.987897] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:22.987921] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:22.987937] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:22.990452] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.791 [2024-07-10 15:50:22.999484] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:22.999823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:22.999962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:22.999986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.000018] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.000184] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:23.000371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:23.000395] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:23.000410] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:23.002786] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.791 [2024-07-10 15:50:23.012097] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:23.012397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.012614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.012644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.012668] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.012835] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:23.013004] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:23.013028] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:23.013043] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:23.015616] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.791 [2024-07-10 15:50:23.024711] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:23.025054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.025261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.025290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.025308] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.025448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:23.025618] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:23.025642] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:23.025658] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:23.028060] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.791 [2024-07-10 15:50:23.037361] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:23.037824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.038029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.038058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.038076] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.038223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:23.038392] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:23.038416] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:23.038442] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:23.040880] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.791 [2024-07-10 15:50:23.049953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:23.050240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.050508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.050537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.050554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.050749] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:23.050919] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:23.050943] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:23.050959] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:23.053269] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.791 [2024-07-10 15:50:23.062489] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:23.063024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.063376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.063433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.063453] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.063618] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:23.063751] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:23.063774] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:23.063790] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:23.066049] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.791 [2024-07-10 15:50:23.075093] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:23.075549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.075779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.075832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.075850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.076015] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.791 [2024-07-10 15:50:23.076237] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.791 [2024-07-10 15:50:23.076261] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.791 [2024-07-10 15:50:23.076277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.791 [2024-07-10 15:50:23.078683] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.791 [2024-07-10 15:50:23.087558] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.791 [2024-07-10 15:50:23.088014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.088183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.791 [2024-07-10 15:50:23.088210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.791 [2024-07-10 15:50:23.088226] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.791 [2024-07-10 15:50:23.088393] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.792 [2024-07-10 15:50:23.088596] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.792 [2024-07-10 15:50:23.088621] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.792 [2024-07-10 15:50:23.088637] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.792 [2024-07-10 15:50:23.090873] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.792 [2024-07-10 15:50:23.100238] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.792 [2024-07-10 15:50:23.100603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.100769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.100812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.792 [2024-07-10 15:50:23.100831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.792 [2024-07-10 15:50:23.100978] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.792 [2024-07-10 15:50:23.101182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.792 [2024-07-10 15:50:23.101206] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.792 [2024-07-10 15:50:23.101221] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.792 [2024-07-10 15:50:23.103759] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.792 [2024-07-10 15:50:23.112872] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.792 [2024-07-10 15:50:23.113251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.113415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.113453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.792 [2024-07-10 15:50:23.113471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.792 [2024-07-10 15:50:23.113619] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.792 [2024-07-10 15:50:23.113788] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.792 [2024-07-10 15:50:23.113811] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.792 [2024-07-10 15:50:23.113827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.792 [2024-07-10 15:50:23.115921] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.792 [2024-07-10 15:50:23.125459] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.792 [2024-07-10 15:50:23.125829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.126175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.126235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.792 [2024-07-10 15:50:23.126253] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.792 [2024-07-10 15:50:23.126381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.792 [2024-07-10 15:50:23.126559] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.792 [2024-07-10 15:50:23.126589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.792 [2024-07-10 15:50:23.126605] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.792 [2024-07-10 15:50:23.128879] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.792 [2024-07-10 15:50:23.137972] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.792 [2024-07-10 15:50:23.138486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.138696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.138721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.792 [2024-07-10 15:50:23.138738] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.792 [2024-07-10 15:50:23.138820] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.792 [2024-07-10 15:50:23.139010] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.792 [2024-07-10 15:50:23.139034] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.792 [2024-07-10 15:50:23.139049] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.792 [2024-07-10 15:50:23.141394] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:43.792 [2024-07-10 15:50:23.150417] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.792 [2024-07-10 15:50:23.150784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.150960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.151004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.792 [2024-07-10 15:50:23.151023] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.792 [2024-07-10 15:50:23.151225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.792 [2024-07-10 15:50:23.151358] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.792 [2024-07-10 15:50:23.151381] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.792 [2024-07-10 15:50:23.151397] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:43.792 [2024-07-10 15:50:23.153589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:43.792 [2024-07-10 15:50:23.163071] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:43.792 [2024-07-10 15:50:23.163409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.163562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:43.792 [2024-07-10 15:50:23.163591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:43.792 [2024-07-10 15:50:23.163609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:43.792 [2024-07-10 15:50:23.163757] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:43.792 [2024-07-10 15:50:23.163948] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:43.792 [2024-07-10 15:50:23.163981] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:43.792 [2024-07-10 15:50:23.164020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.051 [2024-07-10 15:50:23.166451] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.051 [2024-07-10 15:50:23.175577] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.051 [2024-07-10 15:50:23.175930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.176084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.176113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.051 [2024-07-10 15:50:23.176131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.051 [2024-07-10 15:50:23.176278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.051 [2024-07-10 15:50:23.176476] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.051 [2024-07-10 15:50:23.176501] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.051 [2024-07-10 15:50:23.176517] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.051 [2024-07-10 15:50:23.178829] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.051 [2024-07-10 15:50:23.188141] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.051 [2024-07-10 15:50:23.188525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.188719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.188747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.051 [2024-07-10 15:50:23.188766] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.051 [2024-07-10 15:50:23.188931] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.051 [2024-07-10 15:50:23.189082] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.051 [2024-07-10 15:50:23.189105] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.051 [2024-07-10 15:50:23.189121] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.051 [2024-07-10 15:50:23.191493] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.051 [2024-07-10 15:50:23.200722] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.051 [2024-07-10 15:50:23.201117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.201333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.201359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.051 [2024-07-10 15:50:23.201375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.051 [2024-07-10 15:50:23.201500] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.051 [2024-07-10 15:50:23.201667] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.051 [2024-07-10 15:50:23.201691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.051 [2024-07-10 15:50:23.201706] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.051 [2024-07-10 15:50:23.203952] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.051 [2024-07-10 15:50:23.213155] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.051 [2024-07-10 15:50:23.213507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.213752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.213803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.051 [2024-07-10 15:50:23.213821] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.051 [2024-07-10 15:50:23.213987] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.051 [2024-07-10 15:50:23.214175] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.051 [2024-07-10 15:50:23.214199] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.051 [2024-07-10 15:50:23.214214] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.051 [2024-07-10 15:50:23.216626] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.051 [2024-07-10 15:50:23.225703] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.051 [2024-07-10 15:50:23.226120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.226321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.226350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.051 [2024-07-10 15:50:23.226367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.051 [2024-07-10 15:50:23.226488] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.051 [2024-07-10 15:50:23.226676] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.051 [2024-07-10 15:50:23.226699] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.051 [2024-07-10 15:50:23.226715] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.051 [2024-07-10 15:50:23.229112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.051 [2024-07-10 15:50:23.238087] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.051 [2024-07-10 15:50:23.238560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.238801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.238860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.051 [2024-07-10 15:50:23.238878] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.051 [2024-07-10 15:50:23.239061] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.051 [2024-07-10 15:50:23.239248] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.051 [2024-07-10 15:50:23.239271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.051 [2024-07-10 15:50:23.239287] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.051 [2024-07-10 15:50:23.241690] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.051 [2024-07-10 15:50:23.250634] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.051 [2024-07-10 15:50:23.251040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.251197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.051 [2024-07-10 15:50:23.251225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.051 [2024-07-10 15:50:23.251243] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.051 [2024-07-10 15:50:23.251436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.051 [2024-07-10 15:50:23.251624] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.051 [2024-07-10 15:50:23.251647] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.251663] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.253936] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.052 [2024-07-10 15:50:23.263236] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.263583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.263858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.263907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.263925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.264144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.264295] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.264318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.264333] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.266599] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.052 [2024-07-10 15:50:23.275773] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.276172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.276378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.276407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.276434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.276638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.276825] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.276849] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.276864] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.279175] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.052 [2024-07-10 15:50:23.288434] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.288821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.289047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.289103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.289120] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.289303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.289465] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.289489] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.289505] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.291794] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.052 [2024-07-10 15:50:23.300902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.301244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.301433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.301462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.301480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.301663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.301869] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.301892] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.301908] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.304323] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.052 [2024-07-10 15:50:23.313399] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.313753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.313943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.313967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.313983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.314143] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.314312] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.314335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.314350] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.316597] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.052 [2024-07-10 15:50:23.326219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.326508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.326686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.326720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.326739] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.326922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.327092] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.327115] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.327130] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.329500] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.052 [2024-07-10 15:50:23.338540] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.339030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.339286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.339312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.339328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.339501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.339637] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.339659] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.339673] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.342011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.052 [2024-07-10 15:50:23.351225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.351547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.351713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.351742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.052 [2024-07-10 15:50:23.351759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.052 [2024-07-10 15:50:23.351924] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.052 [2024-07-10 15:50:23.352057] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.052 [2024-07-10 15:50:23.352080] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.052 [2024-07-10 15:50:23.352096] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.052 [2024-07-10 15:50:23.354577] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.052 [2024-07-10 15:50:23.363579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.052 [2024-07-10 15:50:23.363921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.364111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.052 [2024-07-10 15:50:23.364163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.053 [2024-07-10 15:50:23.364205] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.053 [2024-07-10 15:50:23.364408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.053 [2024-07-10 15:50:23.364570] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.053 [2024-07-10 15:50:23.364594] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.053 [2024-07-10 15:50:23.364610] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.053 [2024-07-10 15:50:23.366830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.053 [2024-07-10 15:50:23.376180] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.053 [2024-07-10 15:50:23.376587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.376812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.376838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.053 [2024-07-10 15:50:23.376853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.053 [2024-07-10 15:50:23.376999] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.053 [2024-07-10 15:50:23.377133] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.053 [2024-07-10 15:50:23.377156] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.053 [2024-07-10 15:50:23.377172] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.053 [2024-07-10 15:50:23.379550] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.053 [2024-07-10 15:50:23.388779] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.053 [2024-07-10 15:50:23.389169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.389333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.389357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.053 [2024-07-10 15:50:23.389389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.053 [2024-07-10 15:50:23.389529] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.053 [2024-07-10 15:50:23.389716] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.053 [2024-07-10 15:50:23.389740] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.053 [2024-07-10 15:50:23.389756] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.053 [2024-07-10 15:50:23.392084] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.053 [2024-07-10 15:50:23.401460] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.053 [2024-07-10 15:50:23.401844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.402022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.402050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.053 [2024-07-10 15:50:23.402068] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.053 [2024-07-10 15:50:23.402292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.053 [2024-07-10 15:50:23.402490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.053 [2024-07-10 15:50:23.402515] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.053 [2024-07-10 15:50:23.402531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.053 [2024-07-10 15:50:23.404804] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.053 [2024-07-10 15:50:23.414120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.053 [2024-07-10 15:50:23.414461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.414664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.053 [2024-07-10 15:50:23.414693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.053 [2024-07-10 15:50:23.414710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.053 [2024-07-10 15:50:23.414876] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.053 [2024-07-10 15:50:23.415045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.053 [2024-07-10 15:50:23.415068] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.053 [2024-07-10 15:50:23.415084] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.053 [2024-07-10 15:50:23.417309] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.313 [2024-07-10 15:50:23.426882] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.313 [2024-07-10 15:50:23.427459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.313 [2024-07-10 15:50:23.427832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.313 [2024-07-10 15:50:23.427883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.313 [2024-07-10 15:50:23.427902] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.313 [2024-07-10 15:50:23.428067] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.313 [2024-07-10 15:50:23.428163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.313 [2024-07-10 15:50:23.428186] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.313 [2024-07-10 15:50:23.428202] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.313 [2024-07-10 15:50:23.430491] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.313 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2229849 Killed "${NVMF_APP[@]}" "$@" 00:26:44.313 15:50:23 -- host/bdevperf.sh@36 -- # tgt_init 00:26:44.313 15:50:23 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:44.313 15:50:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:44.313 15:50:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:44.313 15:50:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.313 [2024-07-10 15:50:23.439351] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.313 [2024-07-10 15:50:23.439774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.313 [2024-07-10 15:50:23.439932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.313 [2024-07-10 15:50:23.439962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.313 [2024-07-10 15:50:23.439985] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.313 [2024-07-10 15:50:23.440188] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.313 [2024-07-10 15:50:23.440339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.313 [2024-07-10 15:50:23.440363] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.313 [2024-07-10 15:50:23.440379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.313 15:50:23 -- nvmf/common.sh@469 -- # nvmfpid=2230894 00:26:44.313 15:50:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:44.313 15:50:23 -- nvmf/common.sh@470 -- # waitforlisten 2230894 00:26:44.313 15:50:23 -- common/autotest_common.sh@819 -- # '[' -z 2230894 ']' 00:26:44.313 15:50:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.313 15:50:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:44.313 15:50:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.313 15:50:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:44.313 15:50:23 -- common/autotest_common.sh@10 -- # set +x 00:26:44.313 [2024-07-10 15:50:23.442662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
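At this point bdevperf.sh has killed the previous target process (the "Killed" line above), and tgt_init/nvmfappstart relaunches nvmf_tgt inside the cvl_0_0_ns_spdk namespace, then blocks in waitforlisten until the new process (pid 2230894) is serving /var/tmp/spdk.sock. The sketch below is only a simplified stand-in for that wait; the real helper is the one referenced in the common/autotest_common.sh trace lines, and wait_for_rpc_sock is a hypothetical name used here for illustration.
# Simplified, illustrative equivalent of the waitforlisten step: poll until
# the target process has created its RPC socket, giving up if the process
# dies or the timeout expires.
wait_for_rpc_sock() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    for ((i = 0; i < 100; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target process is gone
        [[ -S $sock ]] && return 0               # RPC socket is listening
        sleep 0.1
    done
    return 1                                     # timed out
}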
00:26:44.313 [2024-07-10 15:50:23.451636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.313 [2024-07-10 15:50:23.452035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.313 [2024-07-10 15:50:23.452296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.452321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.452337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.452540] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.452646] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.452668] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.452681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.454662] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.314 [2024-07-10 15:50:23.463939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.464302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.464469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.464496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.464512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.464612] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.464761] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.464795] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.464813] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.466830] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.314 [2024-07-10 15:50:23.476323] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.476751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.476922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.476948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.476965] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.477162] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.477279] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.477299] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.477314] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.479460] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.314 [2024-07-10 15:50:23.480691] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:44.314 [2024-07-10 15:50:23.480770] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.314 [2024-07-10 15:50:23.488774] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.489130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.489325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.489350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.489366] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.489507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.489660] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.489682] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.489696] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.492012] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.314 [2024-07-10 15:50:23.501040] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.501331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.501491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.501517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.501533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.501650] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.501814] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.501839] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.501868] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.504098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.314 [2024-07-10 15:50:23.513401] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.513727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.513882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.513908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.513923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.514055] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.514216] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.514236] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.514248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.516157] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.314 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.314 [2024-07-10 15:50:23.526081] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.526495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.526644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.526670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.526686] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.526803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.527001] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.527021] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.527035] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.529486] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.314 [2024-07-10 15:50:23.538631] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.539004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.539205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.539231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.539247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.539363] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.539540] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.539567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.539582] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.542026] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
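The EAL notice at the top of this block ("No free 2048 kB hugepages reported on node 1") only says that NUMA node 1 has no free 2 MB hugepages for the new target; the run continues, so it is informational here. A quick, illustrative way to see the per-node reservation EAL is reporting on:
# Illustrative check of per-NUMA-node 2048 kB hugepage counters via sysfs.
for node in /sys/devices/system/node/node[0-9]*; do
    hp=$node/hugepages/hugepages-2048kB
    [[ -d $hp ]] || continue
    printf '%s: total=%s free=%s\n' "${node##*/}" \
        "$(cat "$hp/nr_hugepages")" "$(cat "$hp/free_hugepages")"
done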
00:26:44.314 [2024-07-10 15:50:23.551164] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.551519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.551673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.551699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.551715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.551839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.552034] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.552058] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.552073] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.314 [2024-07-10 15:50:23.553523] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:44.314 [2024-07-10 15:50:23.554639] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.314 [2024-07-10 15:50:23.563753] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.314 [2024-07-10 15:50:23.564319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.564519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.314 [2024-07-10 15:50:23.564547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.314 [2024-07-10 15:50:23.564566] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.314 [2024-07-10 15:50:23.564759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.314 [2024-07-10 15:50:23.564888] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.314 [2024-07-10 15:50:23.564909] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.314 [2024-07-10 15:50:23.564924] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.567403] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.315 [2024-07-10 15:50:23.576489] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.576910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.577055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.577081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.577098] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.577251] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.577386] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.577405] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.577460] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.579566] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.315 [2024-07-10 15:50:23.588642] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.588993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.589170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.589195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.589211] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.589376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.589536] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.589558] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.589572] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.591587] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.315 [2024-07-10 15:50:23.600902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.601315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.601506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.601532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.601548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.601714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.601955] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.601975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.601988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.604017] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.315 [2024-07-10 15:50:23.613324] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.613719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.613901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.613926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.613943] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.614098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.614296] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.614318] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.614332] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.616459] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.315 [2024-07-10 15:50:23.625932] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.626448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.626599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.626625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.626645] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.626811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.626936] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.626957] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.626975] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.629192] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.315 [2024-07-10 15:50:23.638289] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.638630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.638851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.638877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.638895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.639054] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.639228] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.639249] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.639263] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.641373] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.315 [2024-07-10 15:50:23.650637] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.651035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.651252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.651279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.651297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.651494] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.651638] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.651658] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.651672] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.653896] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.315 [2024-07-10 15:50:23.663056] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.663389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.663586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.663613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.663629] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.663804] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.663900] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.663924] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.663939] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.666229] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.315 [2024-07-10 15:50:23.668240] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:44.315 [2024-07-10 15:50:23.668353] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.315 [2024-07-10 15:50:23.668369] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.315 [2024-07-10 15:50:23.668382] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
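The app_setup_trace notices just above spell out how to pull the tracepoint data this target is recording (it was started with -e 0xFFFF, so tracepoint group mask 0xFFFF is enabled). The commands below simply restate what the notices say; the /tmp destination for the copy is an arbitrary choice.
# From the notices above: snapshot the running target's trace buffer for
# shared-memory instance id 0 ...
spdk_trace -s nvmf -i 0
# ... or keep the raw trace file from /dev/shm for offline analysis
# (destination path is arbitrary).
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0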
00:26:44.315 [2024-07-10 15:50:23.668470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:44.315 [2024-07-10 15:50:23.668507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:44.315 [2024-07-10 15:50:23.668510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.315 [2024-07-10 15:50:23.675224] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.315 [2024-07-10 15:50:23.675630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.675815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.315 [2024-07-10 15:50:23.675841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.315 [2024-07-10 15:50:23.675859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.315 [2024-07-10 15:50:23.676046] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.315 [2024-07-10 15:50:23.676194] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.315 [2024-07-10 15:50:23.676215] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.315 [2024-07-10 15:50:23.676231] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.315 [2024-07-10 15:50:23.678519] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.574 [2024-07-10 15:50:23.687792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.574 [2024-07-10 15:50:23.688192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-10 15:50:23.688352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-10 15:50:23.688380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.574 [2024-07-10 15:50:23.688399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.574 [2024-07-10 15:50:23.688584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.574 [2024-07-10 15:50:23.688748] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.574 [2024-07-10 15:50:23.688769] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.574 [2024-07-10 15:50:23.688783] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.574 [2024-07-10 15:50:23.690730] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
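The three "Reactor started on core N" notices match the -m 0xE core mask the target was launched with (and the -c 0xE seen in the EAL parameters): 0xE is binary 1110, i.e. cores 1, 2 and 3, with core 0 left out of the mask, which also agrees with "Total cores available: 3" above. A tiny illustration of decoding such a mask:
# Decode the 0xE core mask: bits 1-3 are set, so the reactors run on
# cores 1, 2 and 3 -- consistent with the reactor notices above.
mask=0xE
for core in {0..7}; do
    (( (mask >> core) & 1 )) && echo "core $core is in the mask"
done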
00:26:44.574 [2024-07-10 15:50:23.700159] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.574 [2024-07-10 15:50:23.700565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-10 15:50:23.700733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-10 15:50:23.700759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.574 [2024-07-10 15:50:23.700778] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.574 [2024-07-10 15:50:23.700936] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.574 [2024-07-10 15:50:23.701101] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.574 [2024-07-10 15:50:23.701122] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.574 [2024-07-10 15:50:23.701138] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.574 [2024-07-10 15:50:23.703120] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.574 [2024-07-10 15:50:23.712491] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.574 [2024-07-10 15:50:23.713031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-10 15:50:23.713173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-10 15:50:23.713200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.574 [2024-07-10 15:50:23.713220] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.574 [2024-07-10 15:50:23.713453] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.574 [2024-07-10 15:50:23.713643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.574 [2024-07-10 15:50:23.713664] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.574 [2024-07-10 15:50:23.713681] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.574 [2024-07-10 15:50:23.715745] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.574 [2024-07-10 15:50:23.724697] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.574 [2024-07-10 15:50:23.725165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.574 [2024-07-10 15:50:23.725310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.725336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.725355] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.725504] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.725643] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.725671] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.725687] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.727806] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.575 [2024-07-10 15:50:23.737226] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.737671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.737870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.737896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.737915] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.738058] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.738207] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.738227] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.738243] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.740145] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.575 [2024-07-10 15:50:23.749627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.750011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.750161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.750187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.750204] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.750307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.750466] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.750502] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.750518] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.752589] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.575 [2024-07-10 15:50:23.761840] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.762209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.762357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.762382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.762399] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.762566] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.762720] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.762756] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.762780] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.764817] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.575 [2024-07-10 15:50:23.774225] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.774611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.774795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.774820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.774837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.775002] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.775132] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.775151] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.775165] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.777156] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.575 [2024-07-10 15:50:23.786482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.786795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.786964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.786988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.787005] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.787186] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.787345] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.787365] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.787379] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.789413] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.575 [2024-07-10 15:50:23.798836] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.799228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.799381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.799406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.799440] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.799591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.799800] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.799820] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.799833] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.801932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.575 [2024-07-10 15:50:23.811326] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.811638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.811809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.811834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.811850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.812031] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.812203] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.812230] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.812244] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.814287] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.575 [2024-07-10 15:50:23.823543] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.823905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.824081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.824106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.824122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.824272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.824456] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.824476] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.824491] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.826528] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.575 [2024-07-10 15:50:23.835902] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.836217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.836411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.836445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.575 [2024-07-10 15:50:23.836489] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.575 [2024-07-10 15:50:23.836657] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.575 [2024-07-10 15:50:23.836853] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.575 [2024-07-10 15:50:23.836879] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.575 [2024-07-10 15:50:23.836894] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.575 [2024-07-10 15:50:23.838948] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.575 [2024-07-10 15:50:23.848177] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.575 [2024-07-10 15:50:23.848515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.848692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.575 [2024-07-10 15:50:23.848717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.576 [2024-07-10 15:50:23.848737] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.576 [2024-07-10 15:50:23.848885] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.576 [2024-07-10 15:50:23.849045] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.576 [2024-07-10 15:50:23.849065] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.576 [2024-07-10 15:50:23.849078] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.576 [2024-07-10 15:50:23.851113] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.576 [2024-07-10 15:50:23.860471] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.576 [2024-07-10 15:50:23.860763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.860922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.860946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.576 [2024-07-10 15:50:23.860962] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.576 [2024-07-10 15:50:23.861112] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.576 [2024-07-10 15:50:23.861288] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.576 [2024-07-10 15:50:23.861308] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.576 [2024-07-10 15:50:23.861321] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.576 [2024-07-10 15:50:23.863247] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.576 [2024-07-10 15:50:23.872797] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.576 [2024-07-10 15:50:23.873151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.873295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.873319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.576 [2024-07-10 15:50:23.873336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.576 [2024-07-10 15:50:23.873542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.576 [2024-07-10 15:50:23.873690] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.576 [2024-07-10 15:50:23.873724] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.576 [2024-07-10 15:50:23.873738] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.576 [2024-07-10 15:50:23.875813] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.576 [2024-07-10 15:50:23.885031] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.576 [2024-07-10 15:50:23.885338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.885511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.885538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.576 [2024-07-10 15:50:23.885554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.576 [2024-07-10 15:50:23.885737] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.576 [2024-07-10 15:50:23.885866] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.576 [2024-07-10 15:50:23.885885] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.576 [2024-07-10 15:50:23.885899] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.576 [2024-07-10 15:50:23.887929] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.576 [2024-07-10 15:50:23.897219] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.576 [2024-07-10 15:50:23.897634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.897807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.897832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.576 [2024-07-10 15:50:23.897848] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.576 [2024-07-10 15:50:23.897981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.576 [2024-07-10 15:50:23.898173] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.576 [2024-07-10 15:50:23.898193] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.576 [2024-07-10 15:50:23.898207] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.576 [2024-07-10 15:50:23.900219] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.576 [2024-07-10 15:50:23.909482] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.576 [2024-07-10 15:50:23.909811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.909988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.910012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.576 [2024-07-10 15:50:23.910029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.576 [2024-07-10 15:50:23.910193] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.576 [2024-07-10 15:50:23.910356] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.576 [2024-07-10 15:50:23.910376] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.576 [2024-07-10 15:50:23.910390] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.576 [2024-07-10 15:50:23.912494] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.576 [2024-07-10 15:50:23.921756] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.576 [2024-07-10 15:50:23.922095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.576 [2024-07-10 15:50:23.922258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.834 [2024-07-10 15:50:24.072102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.834 [2024-07-10 15:50:24.072131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.834 [2024-07-10 15:50:24.072278] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.834 [2024-07-10 15:50:24.072307] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:44.834 [2024-07-10 15:50:24.072490] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.072511] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.072524] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.074789] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.835 [2024-07-10 15:50:24.084501] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.084908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.085080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.085107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.085125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.835 [2024-07-10 15:50:24.085242] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.835 [2024-07-10 15:50:24.085445] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.085467] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.085481] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.087455] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.835 [2024-07-10 15:50:24.096726] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.097138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.097285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.097313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.097330] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.835 [2024-07-10 15:50:24.097524] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.835 [2024-07-10 15:50:24.097689] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.097711] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.097741] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.099686] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.835 [2024-07-10 15:50:24.109026] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.109334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.109506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.109540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.109559] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.835 [2024-07-10 15:50:24.109709] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.835 [2024-07-10 15:50:24.109871] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.109893] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.109907] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.112011] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.835 [2024-07-10 15:50:24.121518] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.121826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.121984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.122012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.122029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.835 [2024-07-10 15:50:24.122131] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.835 [2024-07-10 15:50:24.122338] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.122360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.122374] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.124416] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.835 [2024-07-10 15:50:24.133849] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.134199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.134373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.134400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.134417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.835 [2024-07-10 15:50:24.134575] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.835 [2024-07-10 15:50:24.134756] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.134778] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.134792] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.136939] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.835 [2024-07-10 15:50:24.146072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.146408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.146573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.146601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.146623] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.835 [2024-07-10 15:50:24.146788] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.835 [2024-07-10 15:50:24.146934] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.146954] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.146968] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.149037] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.835 [2024-07-10 15:50:24.158453] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.158797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.159001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.159029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.159046] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.835 [2024-07-10 15:50:24.159164] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.835 [2024-07-10 15:50:24.159339] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.835 [2024-07-10 15:50:24.159360] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.835 [2024-07-10 15:50:24.159373] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.835 [2024-07-10 15:50:24.161483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.835 [2024-07-10 15:50:24.170744] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.835 [2024-07-10 15:50:24.171028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.171194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.835 [2024-07-10 15:50:24.171222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.835 [2024-07-10 15:50:24.171239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.836 [2024-07-10 15:50:24.171388] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.836 [2024-07-10 15:50:24.171564] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.836 [2024-07-10 15:50:24.171588] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.836 [2024-07-10 15:50:24.171603] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.836 [2024-07-10 15:50:24.173799] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.836 [2024-07-10 15:50:24.182884] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.836 [2024-07-10 15:50:24.183208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-10 15:50:24.183347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-10 15:50:24.183373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.836 [2024-07-10 15:50:24.183390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.836 [2024-07-10 15:50:24.183554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.836 [2024-07-10 15:50:24.183691] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.836 [2024-07-10 15:50:24.183714] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.836 [2024-07-10 15:50:24.183743] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.836 [2024-07-10 15:50:24.185828] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:44.836 [2024-07-10 15:50:24.195345] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.836 [2024-07-10 15:50:24.195744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-10 15:50:24.195882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-10 15:50:24.195909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.836 [2024-07-10 15:50:24.195926] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.836 [2024-07-10 15:50:24.196075] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.836 [2024-07-10 15:50:24.196266] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.836 [2024-07-10 15:50:24.196288] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.836 [2024-07-10 15:50:24.196302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:44.836 [2024-07-10 15:50:24.198439] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:44.836 [2024-07-10 15:50:24.207941] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:44.836 [2024-07-10 15:50:24.208296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-10 15:50:24.208473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:44.836 [2024-07-10 15:50:24.208501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:44.836 [2024-07-10 15:50:24.208519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:44.836 [2024-07-10 15:50:24.208653] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:44.836 [2024-07-10 15:50:24.208838] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:44.836 [2024-07-10 15:50:24.208876] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:44.836 [2024-07-10 15:50:24.208902] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.095 [2024-07-10 15:50:24.210943] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.095 [2024-07-10 15:50:24.220355] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.095 [2024-07-10 15:50:24.220710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-10 15:50:24.220876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-10 15:50:24.220903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.095 [2024-07-10 15:50:24.220920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.095 [2024-07-10 15:50:24.221037] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.095 [2024-07-10 15:50:24.221206] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.095 [2024-07-10 15:50:24.221228] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.095 [2024-07-10 15:50:24.221241] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.095 [2024-07-10 15:50:24.223244] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.095 [2024-07-10 15:50:24.232467] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.095 [2024-07-10 15:50:24.232798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-10 15:50:24.232961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.095 [2024-07-10 15:50:24.232987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.233004] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.233137] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.233314] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.233335] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.233349] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.235476] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.096 [2024-07-10 15:50:24.244627] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.244928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.245104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.245130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.245146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.245325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.245558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.245580] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.245595] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.247580] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.096 [2024-07-10 15:50:24.257088] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.257422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.257587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.257614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.257630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.257857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.257985] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.258011] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.258025] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.260090] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.096 [2024-07-10 15:50:24.269392] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.269774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.269941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.269967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.269984] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.270147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.270307] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.270329] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.270343] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.272492] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.096 [2024-07-10 15:50:24.281717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.282004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.282166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.282192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.282208] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.282358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.282532] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.282554] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.282569] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.284457] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.096 [2024-07-10 15:50:24.294004] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.294372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.294555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.294582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.294599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.294794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.294953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.294973] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.294992] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.297045] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.096 [2024-07-10 15:50:24.306336] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.306637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.306798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.306825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.306841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.307021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.307182] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.307203] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.307217] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.309292] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.096 [2024-07-10 15:50:24.318688] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.319005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.319172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.319199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.319215] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.319348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.319493] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.319516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.319531] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.321766] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.096 [2024-07-10 15:50:24.331107] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.331410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.331559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.331585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.331602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.331767] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.331953] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.331975] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.331990] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.096 [2024-07-10 15:50:24.334096] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.096 [2024-07-10 15:50:24.343319] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.096 [2024-07-10 15:50:24.343662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.343810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.096 [2024-07-10 15:50:24.343837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.096 [2024-07-10 15:50:24.343853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.096 [2024-07-10 15:50:24.344017] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.096 [2024-07-10 15:50:24.344193] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.096 [2024-07-10 15:50:24.344214] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.096 [2024-07-10 15:50:24.344228] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.346464] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.097 [2024-07-10 15:50:24.355563] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.356005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.356193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.356220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.356237] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.356400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.356575] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.356598] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.356612] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.358714] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.097 [2024-07-10 15:50:24.367947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.368298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.368493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.368521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.368538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.368638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.368834] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.368855] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.368869] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.370926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.097 [2024-07-10 15:50:24.380094] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.380443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.380582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.380608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.380625] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.380759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.380967] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.380988] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.381001] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.383140] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.097 [2024-07-10 15:50:24.392301] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.392599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.392771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.392798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.392814] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.392994] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.393154] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.393175] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.393189] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.395507] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.097 [2024-07-10 15:50:24.404524] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.404829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.404978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.405004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.405020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.405218] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.405347] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.405369] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.405383] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.407501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.097 [2024-07-10 15:50:24.416903] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.417298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.417503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.417531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.417548] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.417682] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.417867] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.417887] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.417901] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.419991] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.097 [2024-07-10 15:50:24.429109] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.429445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.429611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.429639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.429656] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.429839] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.429988] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.430010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.430024] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.432028] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.097 15:50:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:45.097 15:50:24 -- common/autotest_common.sh@852 -- # return 0 00:26:45.097 15:50:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:45.097 15:50:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:45.097 15:50:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.097 [2024-07-10 15:50:24.441512] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.441872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.442030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.442057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.442075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.442241] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.442442] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.442466] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.442485] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.444747] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.097 [2024-07-10 15:50:24.453861] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.097 [2024-07-10 15:50:24.454223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.454357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.097 [2024-07-10 15:50:24.454384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.097 [2024-07-10 15:50:24.454401] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.097 [2024-07-10 15:50:24.454574] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.097 [2024-07-10 15:50:24.454758] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.097 [2024-07-10 15:50:24.454779] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.097 [2024-07-10 15:50:24.454793] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.097 [2024-07-10 15:50:24.456970] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.097 15:50:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:45.097 15:50:24 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:45.097 15:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:45.098 15:50:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.098 [2024-07-10 15:50:24.462263] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:45.098 [2024-07-10 15:50:24.466059] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.098 [2024-07-10 15:50:24.466483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.098 [2024-07-10 15:50:24.466650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.098 [2024-07-10 15:50:24.466677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.098 [2024-07-10 15:50:24.466693] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.098 [2024-07-10 15:50:24.466844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.098 [2024-07-10 15:50:24.466994] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.098 [2024-07-10 15:50:24.467017] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.098 [2024-07-10 15:50:24.467032] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.098 15:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:45.098 15:50:24 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:45.098 15:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:45.098 15:50:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.098 [2024-07-10 15:50:24.469420] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.356 [2024-07-10 15:50:24.478440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.356 [2024-07-10 15:50:24.478852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.479018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.479045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.356 [2024-07-10 15:50:24.479061] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.356 [2024-07-10 15:50:24.479272] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.356 [2024-07-10 15:50:24.479485] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.356 [2024-07-10 15:50:24.479516] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.356 [2024-07-10 15:50:24.479532] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:26:45.356 [2024-07-10 15:50:24.481654] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.356 [2024-07-10 15:50:24.490777] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.356 [2024-07-10 15:50:24.491253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.491441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.491478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.356 [2024-07-10 15:50:24.491497] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.356 [2024-07-10 15:50:24.491684] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.356 [2024-07-10 15:50:24.491918] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.356 [2024-07-10 15:50:24.491941] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.356 [2024-07-10 15:50:24.491957] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.356 [2024-07-10 15:50:24.494021] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.356 [2024-07-10 15:50:24.503010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.356 [2024-07-10 15:50:24.503486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.503695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.503723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.356 [2024-07-10 15:50:24.503743] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.356 [2024-07-10 15:50:24.503932] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.356 [2024-07-10 15:50:24.504079] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.356 [2024-07-10 15:50:24.504101] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.356 [2024-07-10 15:50:24.504118] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.356 Malloc0 00:26:45.356 15:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:45.356 15:50:24 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:45.356 15:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:45.356 15:50:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.356 [2024-07-10 15:50:24.506290] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:45.356 15:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:45.356 15:50:24 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:45.356 15:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:45.356 15:50:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.356 [2024-07-10 15:50:24.515246] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.356 [2024-07-10 15:50:24.515594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.515770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:45.356 [2024-07-10 15:50:24.515798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x5ce400 with addr=10.0.0.2, port=4420 00:26:45.356 [2024-07-10 15:50:24.515822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5ce400 is same with the state(5) to be set 00:26:45.356 [2024-07-10 15:50:24.515971] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5ce400 (9): Bad file descriptor 00:26:45.356 [2024-07-10 15:50:24.516163] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:26:45.356 [2024-07-10 15:50:24.516185] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:26:45.356 [2024-07-10 15:50:24.516199] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:45.356 [2024-07-10 15:50:24.518188] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:45.356 15:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:45.356 15:50:24 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.356 15:50:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:45.356 15:50:24 -- common/autotest_common.sh@10 -- # set +x 00:26:45.356 [2024-07-10 15:50:24.524855] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.356 [2024-07-10 15:50:24.527602] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:45.356 15:50:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:45.356 15:50:24 -- host/bdevperf.sh@38 -- # wait 2230204 00:26:45.356 [2024-07-10 15:50:24.559884] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
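The rpc_cmd trace above is the complete target-side bring-up: create the TCP transport, back it with a 64 MB Malloc bdev, create subsystem cnode1, attach the namespace, and add the 10.0.0.2:4420 listener, at which point the host's reset loop finally reports "Resetting controller successful." A condensed sketch of the same sequence against an already-running nvmf_tgt, using SPDK's scripts/rpc.py (the RPC socket path is the SPDK default assumed here; the trace's additional '-o' transport flag is omitted):

    RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
    $RPC nvmf_create_transport -t tcp -u 8192              # TCP transport, 8 KiB IO unit size as in the trace
    $RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MB RAM-backed bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420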
00:26:55.319 00:26:55.319 Latency(us) 00:26:55.319 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.319 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:55.319 Verification LBA range: start 0x0 length 0x4000 00:26:55.319 Nvme1n1 : 15.01 9383.29 36.65 15011.26 0.00 5232.07 837.40 152237.70 00:26:55.319 =================================================================================================================== 00:26:55.319 Total : 9383.29 36.65 15011.26 0.00 5232.07 837.40 152237.70 00:26:55.319 15:50:33 -- host/bdevperf.sh@39 -- # sync 00:26:55.319 15:50:33 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:55.319 15:50:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:55.319 15:50:33 -- common/autotest_common.sh@10 -- # set +x 00:26:55.319 15:50:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:55.319 15:50:33 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:55.319 15:50:33 -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:55.319 15:50:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:55.319 15:50:33 -- nvmf/common.sh@116 -- # sync 00:26:55.319 15:50:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:55.319 15:50:33 -- nvmf/common.sh@119 -- # set +e 00:26:55.319 15:50:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:55.319 15:50:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:55.319 rmmod nvme_tcp 00:26:55.319 rmmod nvme_fabrics 00:26:55.319 rmmod nvme_keyring 00:26:55.319 15:50:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:55.319 15:50:33 -- nvmf/common.sh@123 -- # set -e 00:26:55.319 15:50:33 -- nvmf/common.sh@124 -- # return 0 00:26:55.319 15:50:33 -- nvmf/common.sh@477 -- # '[' -n 2230894 ']' 00:26:55.319 15:50:33 -- nvmf/common.sh@478 -- # killprocess 2230894 00:26:55.319 15:50:33 -- common/autotest_common.sh@926 -- # '[' -z 2230894 ']' 00:26:55.319 15:50:33 -- common/autotest_common.sh@930 -- # kill -0 2230894 00:26:55.319 15:50:33 -- common/autotest_common.sh@931 -- # uname 00:26:55.319 15:50:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:55.319 15:50:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2230894 00:26:55.319 15:50:33 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:26:55.319 15:50:33 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:26:55.319 15:50:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2230894' 00:26:55.319 killing process with pid 2230894 00:26:55.319 15:50:33 -- common/autotest_common.sh@945 -- # kill 2230894 00:26:55.319 15:50:33 -- common/autotest_common.sh@950 -- # wait 2230894 00:26:55.319 15:50:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:26:55.319 15:50:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:55.319 15:50:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:55.319 15:50:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:55.319 15:50:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:55.319 15:50:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:55.319 15:50:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:55.319 15:50:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.312 15:50:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:26:56.312 00:26:56.312 real 0m23.320s 00:26:56.312 user 0m58.757s 00:26:56.312 sys 0m5.837s 00:26:56.312 15:50:35 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:26:56.312 15:50:35 -- common/autotest_common.sh@10 -- # set +x 00:26:56.312 ************************************ 00:26:56.312 END TEST nvmf_bdevperf 00:26:56.312 ************************************ 00:26:56.312 15:50:35 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:56.312 15:50:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:26:56.312 15:50:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:56.312 15:50:35 -- common/autotest_common.sh@10 -- # set +x 00:26:56.312 ************************************ 00:26:56.312 START TEST nvmf_target_disconnect 00:26:56.312 ************************************ 00:26:56.312 15:50:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:56.312 * Looking for test storage... 00:26:56.312 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:56.312 15:50:35 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:56.312 15:50:35 -- nvmf/common.sh@7 -- # uname -s 00:26:56.312 15:50:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:56.312 15:50:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:56.312 15:50:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:56.312 15:50:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:56.312 15:50:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:56.312 15:50:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:56.312 15:50:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:56.312 15:50:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:56.312 15:50:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:56.312 15:50:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:56.312 15:50:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.312 15:50:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:56.312 15:50:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:56.312 15:50:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:56.312 15:50:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:56.312 15:50:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:56.606 15:50:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:56.606 15:50:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:56.606 15:50:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:56.606 15:50:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.606 15:50:35 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.606 15:50:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.606 15:50:35 -- paths/export.sh@5 -- # export PATH 00:26:56.606 15:50:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:56.606 15:50:35 -- nvmf/common.sh@46 -- # : 0 00:26:56.606 15:50:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:56.606 15:50:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:56.606 15:50:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:56.606 15:50:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:56.606 15:50:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:56.606 15:50:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:56.606 15:50:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:56.606 15:50:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:56.606 15:50:35 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:56.606 15:50:35 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:56.606 15:50:35 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:56.606 15:50:35 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:26:56.606 15:50:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:56.606 15:50:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:56.606 15:50:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:56.606 15:50:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:56.606 15:50:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:56.606 15:50:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:56.606 15:50:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:56.606 15:50:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:56.606 15:50:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:26:56.606 15:50:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:26:56.606 15:50:35 -- nvmf/common.sh@284 -- # 
xtrace_disable 00:26:56.606 15:50:35 -- common/autotest_common.sh@10 -- # set +x 00:26:58.510 15:50:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:58.510 15:50:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:26:58.510 15:50:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:26:58.510 15:50:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:26:58.510 15:50:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:26:58.510 15:50:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:26:58.510 15:50:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:26:58.510 15:50:37 -- nvmf/common.sh@294 -- # net_devs=() 00:26:58.510 15:50:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:26:58.510 15:50:37 -- nvmf/common.sh@295 -- # e810=() 00:26:58.510 15:50:37 -- nvmf/common.sh@295 -- # local -ga e810 00:26:58.510 15:50:37 -- nvmf/common.sh@296 -- # x722=() 00:26:58.510 15:50:37 -- nvmf/common.sh@296 -- # local -ga x722 00:26:58.510 15:50:37 -- nvmf/common.sh@297 -- # mlx=() 00:26:58.510 15:50:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:26:58.510 15:50:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:58.510 15:50:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:26:58.510 15:50:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:26:58.510 15:50:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:26:58.510 15:50:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:58.510 15:50:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:58.510 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:58.510 15:50:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:26:58.510 15:50:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:58.510 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:58.510 15:50:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:26:58.510 15:50:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:58.510 15:50:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.510 15:50:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:58.510 15:50:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.510 15:50:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:58.510 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:58.510 15:50:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.510 15:50:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:26:58.510 15:50:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:58.510 15:50:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:26:58.510 15:50:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:58.510 15:50:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:58.510 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:58.510 15:50:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:26:58.510 15:50:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:26:58.510 15:50:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:26:58.510 15:50:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:26:58.510 15:50:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:26:58.510 15:50:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:58.510 15:50:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:58.510 15:50:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:58.510 15:50:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:26:58.510 15:50:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:58.510 15:50:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:58.510 15:50:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:26:58.510 15:50:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:58.510 15:50:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:58.510 15:50:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:26:58.510 15:50:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:26:58.510 15:50:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:26:58.510 15:50:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:58.510 15:50:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:58.510 15:50:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:58.511 15:50:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:26:58.511 15:50:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:58.511 15:50:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:58.511 15:50:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:58.511 15:50:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:26:58.511 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:58.511 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:26:58.511 00:26:58.511 --- 10.0.0.2 ping statistics --- 00:26:58.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.511 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:26:58.511 15:50:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:58.511 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:58.511 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:26:58.511 00:26:58.511 --- 10.0.0.1 ping statistics --- 00:26:58.511 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:58.511 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:26:58.511 15:50:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:58.511 15:50:37 -- nvmf/common.sh@410 -- # return 0 00:26:58.511 15:50:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:26:58.511 15:50:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:58.511 15:50:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:58.511 15:50:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:58.511 15:50:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:58.511 15:50:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:58.511 15:50:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:58.511 15:50:37 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:58.511 15:50:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:58.511 15:50:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:58.511 15:50:37 -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 ************************************ 00:26:58.511 START TEST nvmf_target_disconnect_tc1 00:26:58.511 ************************************ 00:26:58.511 15:50:37 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:26:58.511 15:50:37 -- host/target_disconnect.sh@32 -- # set +e 00:26:58.511 15:50:37 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:58.511 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.511 [2024-07-10 15:50:37.780059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.511 [2024-07-10 15:50:37.780292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.511 [2024-07-10 15:50:37.780326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xbb0920 with addr=10.0.0.2, port=4420 00:26:58.511 [2024-07-10 15:50:37.780360] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:58.511 [2024-07-10 15:50:37.780387] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:58.511 [2024-07-10 15:50:37.780403] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:26:58.511 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:26:58.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:58.511 Initializing NVMe Controllers 00:26:58.511 15:50:37 -- host/target_disconnect.sh@33 -- # trap - ERR 00:26:58.511 15:50:37 -- host/target_disconnect.sh@33 -- # print_backtrace 00:26:58.511 15:50:37 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:26:58.511 15:50:37 -- common/autotest_common.sh@1132 -- # return 0 00:26:58.511 
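nvmftestinit above wires the two E810 ports into a point-to-point pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24 (the target side) while cvl_0_1 keeps 10.0.0.1/24 in the root namespace (the initiator side), and an iptables rule admits TCP/4420. nvmf_target_disconnect_tc1 then deliberately probes 10.0.0.2:4420 before any target app is started, so the ECONNREFUSED and the "spdk_nvme_probe() failed" backtrace are the expected outcome. Condensed from the trace (interface names assume the same two-port cvl_0_* layout):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                   # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target reachability
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1            # target -> initiator reachability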
15:50:37 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:26:58.511 15:50:37 -- host/target_disconnect.sh@41 -- # set -e 00:26:58.511 00:26:58.511 real 0m0.099s 00:26:58.511 user 0m0.034s 00:26:58.511 sys 0m0.063s 00:26:58.511 15:50:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:58.511 15:50:37 -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 ************************************ 00:26:58.511 END TEST nvmf_target_disconnect_tc1 00:26:58.511 ************************************ 00:26:58.511 15:50:37 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:58.511 15:50:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:58.511 15:50:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:58.511 15:50:37 -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 ************************************ 00:26:58.511 START TEST nvmf_target_disconnect_tc2 00:26:58.511 ************************************ 00:26:58.511 15:50:37 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:26:58.511 15:50:37 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:26:58.511 15:50:37 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:58.511 15:50:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:58.511 15:50:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:58.511 15:50:37 -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 15:50:37 -- nvmf/common.sh@469 -- # nvmfpid=2234089 00:26:58.511 15:50:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:58.511 15:50:37 -- nvmf/common.sh@470 -- # waitforlisten 2234089 00:26:58.511 15:50:37 -- common/autotest_common.sh@819 -- # '[' -z 2234089 ']' 00:26:58.511 15:50:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:58.511 15:50:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:58.511 15:50:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:58.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:58.511 15:50:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:58.511 15:50:37 -- common/autotest_common.sh@10 -- # set +x 00:26:58.511 [2024-07-10 15:50:37.868510] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:26:58.511 [2024-07-10 15:50:37.868584] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:58.770 EAL: No free 2048 kB hugepages reported on node 1 00:26:58.770 [2024-07-10 15:50:37.937161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:58.770 [2024-07-10 15:50:38.047262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:58.770 [2024-07-10 15:50:38.047401] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:58.770 [2024-07-10 15:50:38.047418] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:58.770 [2024-07-10 15:50:38.047439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
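For nvmf_target_disconnect_tc2, disconnect_init/nvmfappstart launch a fresh nvmf_tgt inside the namespace with core mask 0xF0 (cores 4-7, as confirmed by the reactor-started notices that follow) and the full 0xFFFF trace group mask, and waitforlisten blocks until the app's RPC socket answers. A rough equivalent with the wait loop written out (the spdk_get_version RPC and the /var/tmp/spdk.sock path are SPDK defaults assumed here):

    # launch the target inside the namespace on cores 4-7, then wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    NVMF_PID=$!
    until scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
        kill -0 "$NVMF_PID" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done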
00:26:58.770 [2024-07-10 15:50:38.047541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:26:58.770 [2024-07-10 15:50:38.047594] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:26:58.770 [2024-07-10 15:50:38.047616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:26:58.770 [2024-07-10 15:50:38.047622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:26:59.704 15:50:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:59.704 15:50:38 -- common/autotest_common.sh@852 -- # return 0 00:26:59.704 15:50:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:59.704 15:50:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:59.704 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 15:50:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:59.704 15:50:38 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:59.704 15:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.704 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 Malloc0 00:26:59.704 15:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.704 15:50:38 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:59.704 15:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.704 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 [2024-07-10 15:50:38.855960] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:59.704 15:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.704 15:50:38 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:59.704 15:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.704 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 15:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.704 15:50:38 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:59.704 15:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.704 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 15:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.704 15:50:38 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:59.704 15:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.704 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 [2024-07-10 15:50:38.884220] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:59.704 15:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.704 15:50:38 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:59.704 15:50:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:59.704 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:26:59.704 15:50:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:59.704 15:50:38 -- host/target_disconnect.sh@50 -- # reconnectpid=2234248 00:26:59.704 15:50:38 -- host/target_disconnect.sh@52 -- # sleep 2 00:26:59.704 15:50:38 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:59.704 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.615 15:50:40 -- host/target_disconnect.sh@53 -- # kill -9 2234089 00:27:01.615 15:50:40 -- host/target_disconnect.sh@55 -- # sleep 2 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Write completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 [2024-07-10 15:50:40.908822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed 
with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.615 Read completed with error (sct=0, sc=8) 00:27:01.615 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 [2024-07-10 15:50:40.909153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Read completed with error (sct=0, 
sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.616 Write completed with error (sct=0, sc=8) 00:27:01.616 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Write completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 [2024-07-10 15:50:40.909458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.617 Read completed with error (sct=0, sc=8) 00:27:01.617 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 
00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Read completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 Write completed with error (sct=0, sc=8) 00:27:01.618 starting I/O failed 00:27:01.618 [2024-07-10 15:50:40.909772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:01.618 [2024-07-10 15:50:40.910005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.910217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.910245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.618 qpair failed and we were unable to recover it. 00:27:01.618 [2024-07-10 15:50:40.910419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.910573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.910601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.618 qpair failed and we were unable to recover it. 00:27:01.618 [2024-07-10 15:50:40.910772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.910903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.910931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.618 qpair failed and we were unable to recover it. 00:27:01.618 [2024-07-10 15:50:40.911149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.911385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.618 [2024-07-10 15:50:40.911431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.618 qpair failed and we were unable to recover it. 
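The burst of "Read/Write completed with error (sct=0, sc=8)" entries follows the kill -9 of target pid 2234089: status code type 0 with status code 0x08 is the generic "Command Aborted due to SQ Deletion", consistent with the host failing its queued I/O locally as the qpairs are torn down, and every reconnect attempt after that returns ECONNREFUSED because nothing is listening on 10.0.0.2:4420 any more. The constant can be checked against the SPDK source tree (path relative to the spdk checkout):

    # SPDK_NVME_SC_ABORTED_SQ_DELETION is expected to be 0x08 in the generic status set
    grep -n "ABORTED_SQ_DELETION" include/spdk/nvme_spec.h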
00:27:01.619 [2024-07-10 15:50:40.911571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.911722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.911748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 00:27:01.619 [2024-07-10 15:50:40.911949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.912139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.912173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 00:27:01.619 [2024-07-10 15:50:40.912315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.912512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.912539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 00:27:01.619 [2024-07-10 15:50:40.912683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.912842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.912868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 00:27:01.619 [2024-07-10 15:50:40.913142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.913316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.913342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 00:27:01.619 [2024-07-10 15:50:40.913514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.913656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.913693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 00:27:01.619 [2024-07-10 15:50:40.913901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.914121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.914168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 
00:27:01.619 [2024-07-10 15:50:40.914325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.914503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.619 [2024-07-10 15:50:40.914530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.619 qpair failed and we were unable to recover it. 00:27:01.619 [2024-07-10 15:50:40.914794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.914970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.914996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.624 qpair failed and we were unable to recover it. 00:27:01.624 [2024-07-10 15:50:40.915192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.915358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.915385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.624 qpair failed and we were unable to recover it. 00:27:01.624 [2024-07-10 15:50:40.915542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.915729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.915755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.624 qpair failed and we were unable to recover it. 00:27:01.624 [2024-07-10 15:50:40.915938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.916155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.916186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.624 qpair failed and we were unable to recover it. 00:27:01.624 [2024-07-10 15:50:40.916420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.916575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.624 [2024-07-10 15:50:40.916601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.624 qpair failed and we were unable to recover it. 00:27:01.624 [2024-07-10 15:50:40.916785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.917084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.917133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 
00:27:01.625 [2024-07-10 15:50:40.917324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.917561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.917589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.917736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.917923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.917964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.918156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.918332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.918359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.918496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.918643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.918670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.918856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.919140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.919193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.919475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.919616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.919642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.919837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.920104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.920133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 
00:27:01.625 [2024-07-10 15:50:40.920382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.920561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.920592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.920731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.920933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.920960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.921158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.921312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.921352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.921583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.921756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.921783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.921956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.922122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.922165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.922384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.922555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.922581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.922775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.922988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.923033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 
00:27:01.625 [2024-07-10 15:50:40.923278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.923478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.923505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.625 [2024-07-10 15:50:40.923671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.923833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.625 [2024-07-10 15:50:40.923875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.625 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.924140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.924298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.924324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.924497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.924689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.924716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.924943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.925270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.925322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.925497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.925683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.925710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.925901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.926163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.926188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 
00:27:01.626 [2024-07-10 15:50:40.926317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.926495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.926521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.926797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.926975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.927000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.927139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.927290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.927318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.927514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.927673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.927701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.927836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.928001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.928027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.928188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.928838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.928865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.929043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.929215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.929243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 
00:27:01.626 [2024-07-10 15:50:40.929408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.929558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.929585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.929754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.929905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.929931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.930090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.930220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.930246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.930414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.930677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.930721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.930951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.931259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.931321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.931537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.931747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.931790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.931982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.932179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.932223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 
00:27:01.626 [2024-07-10 15:50:40.932406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.932572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.932599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.932757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.932903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.932930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.933080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.933242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.933284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.933455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.933646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.933673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.933853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.934013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.934040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.934213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.934397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.934430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.934600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.934765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.934805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 
00:27:01.626 [2024-07-10 15:50:40.935021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.935228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.935254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.935413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.935647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.935673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.935899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.936111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.936138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.936320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.936510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.936538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.936791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.936956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.936986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.937152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.937396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.937443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.937613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.937778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.937806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 
00:27:01.626 [2024-07-10 15:50:40.938021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.938261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.938287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.626 [2024-07-10 15:50:40.938475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.938706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.626 [2024-07-10 15:50:40.938751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.626 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.938946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.939116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.939156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.939351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.939542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.939569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.939748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.939959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.939989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.940190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.940384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.940410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.940629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.940793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.940837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 
00:27:01.627 [2024-07-10 15:50:40.940992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.941217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.941243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.941440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.941627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.941671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.941930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.942164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.942208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.942367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.942530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.942558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.942801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.942996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.943037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.943196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.943374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.943400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.943558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.943765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.943795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 
00:27:01.627 [2024-07-10 15:50:40.943997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.944195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.944236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.944435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.944621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.944647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.944811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.945002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.945043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.945237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.945481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.945509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.945674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.945883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.945910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.946163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.946333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.946360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.946555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.946756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.946783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 
00:27:01.627 [2024-07-10 15:50:40.946943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.947111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.947137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.947324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.947485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.947512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.947752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.947940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.947967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.948127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.948290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.948318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.948477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.948667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.948694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.948850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.949057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.949087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.949266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.949435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.949463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 
00:27:01.627 [2024-07-10 15:50:40.949655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.949829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.949857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.950048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.950211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.627 [2024-07-10 15:50:40.950239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.627 qpair failed and we were unable to recover it. 00:27:01.627 [2024-07-10 15:50:40.950405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.950590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.950617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.950779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.950945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.950972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.951155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.951342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.951369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.951528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.951691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.951719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.951906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.952088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.952114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 
00:27:01.628 [2024-07-10 15:50:40.952289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.952472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.952499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.952663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.952827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.952854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.952989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.953151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.953179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.953309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.953520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.953565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.953744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.953877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.953903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.954064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.954231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.954259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.954508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.954705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.954731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 
00:27:01.628 [2024-07-10 15:50:40.954893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.955058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.955086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.955258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.955420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.955454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.955624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.955797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.955823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.956016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.956171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.956197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.956359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.956539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.956583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.956788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.956979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.957005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.957187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.957378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.957407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 
00:27:01.628 [2024-07-10 15:50:40.957613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.957814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.957860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.958884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.959214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.959267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.959469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.959724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.959768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.959968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.960185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.960212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.960408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.960572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.960598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.960808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.960985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.961037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.961253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.961397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.961431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 
00:27:01.628 [2024-07-10 15:50:40.961590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.961755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.961797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.961986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.962190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.962232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.962365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.962540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.962566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.962747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.962909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.962934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.963107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.963270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.963295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.963436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.963604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.963630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.628 qpair failed and we were unable to recover it. 00:27:01.628 [2024-07-10 15:50:40.963796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.964005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.628 [2024-07-10 15:50:40.964030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 
00:27:01.629 [2024-07-10 15:50:40.964217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.964376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.964401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.964586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.964791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.964817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.965038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.965208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.965234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.965450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.965630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.965673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.965821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.966083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.966126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.966386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.966591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.966618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.966809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.967083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.967126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 
00:27:01.629 [2024-07-10 15:50:40.967321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.967459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.967486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.967674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.967889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.967914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.968110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.968262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.968287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.968421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.968639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.968683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.968845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.969034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.969060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.969246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.969410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.969441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.969595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.969800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.969844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 
00:27:01.629 [2024-07-10 15:50:40.970096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.970309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.970334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.970505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.970684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.970713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.970926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.971216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.971244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.971373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.971565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.971610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.971775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.971967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.972008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.972177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.972380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.972406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.972612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.972810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.972853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 
00:27:01.629 [2024-07-10 15:50:40.973039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.973207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.973252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.973430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.973639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.973683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.973881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.974122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.974163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.974358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.974522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.974549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.974777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.974926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.974952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.975142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.975341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.975371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.975540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.975701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.975727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 
00:27:01.629 [2024-07-10 15:50:40.975899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.976111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.976155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.976348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.976520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.976548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.976711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.976925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.976950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.977155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.977308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.977349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.977490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.977663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.977690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.977874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.978070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.978111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.978352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.978550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.978577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 
00:27:01.629 [2024-07-10 15:50:40.978756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.978919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.978946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.979136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.979317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.979347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.979571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.979776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.979819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.979967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.980181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.980207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.980394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.980621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.980665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.980882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.981037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.981063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.981322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.981507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.981551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 
00:27:01.629 [2024-07-10 15:50:40.981718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.981935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.981962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.982135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.982264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.982290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.629 qpair failed and we were unable to recover it. 00:27:01.629 [2024-07-10 15:50:40.982435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.629 [2024-07-10 15:50:40.982574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.982601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.630 qpair failed and we were unable to recover it. 00:27:01.630 [2024-07-10 15:50:40.982765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.982952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.982993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.630 qpair failed and we were unable to recover it. 00:27:01.630 [2024-07-10 15:50:40.983187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.983516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.983546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.630 qpair failed and we were unable to recover it. 00:27:01.630 [2024-07-10 15:50:40.983792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.984000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.984025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.630 qpair failed and we were unable to recover it. 00:27:01.630 [2024-07-10 15:50:40.984216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.984360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.984386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.630 qpair failed and we were unable to recover it. 
00:27:01.630 [2024-07-10 15:50:40.984544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.984713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.630 [2024-07-10 15:50:40.984740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.630 qpair failed and we were unable to recover it. 00:27:01.630 [2024-07-10 15:50:40.984904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.985076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.985102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.985250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.985417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.985448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.985692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.985932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.985961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.986175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.986306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.986332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.986482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.986693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.986737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.986922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.987164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.987190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 
00:27:01.893 [2024-07-10 15:50:40.987353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.987514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.987543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.987737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.987898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.987926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.988113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.988298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.988324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.988497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.988671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.988697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.988855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.989010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.989055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.989185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.989359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.989385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.989598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.989795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.989838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 
00:27:01.893 [2024-07-10 15:50:40.990017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.990222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.990248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.990451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.990635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.990678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.990863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.991097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.991141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.991277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.991485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.991515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.991684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.991929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.991955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.992212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.992393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.992419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 00:27:01.893 [2024-07-10 15:50:40.992596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.992774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.893 [2024-07-10 15:50:40.992817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.893 qpair failed and we were unable to recover it. 
00:27:01.893 [2024-07-10 15:50:40.992979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.993222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.993248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.993436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.993599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.993625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.993866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.994127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.994169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.994359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.994527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.994554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.994739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.994915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.994941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.995109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.995278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.995306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.995497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.995658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.995683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 
00:27:01.894 [2024-07-10 15:50:40.995869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.996037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.996080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.996239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.996401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.996434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.996599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.996731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.996758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.996958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.997197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.997223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.997411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.997607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.997633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.997775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.997953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.997996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.998129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.998292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.998317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 
00:27:01.894 [2024-07-10 15:50:40.998460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.998645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.998688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.998875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.999024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.999052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.999243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.999406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.999438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:40.999608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.999803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:40.999846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.000018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.000176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.000203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.000380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.000566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.000595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.000804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.001006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.001050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 
00:27:01.894 [2024-07-10 15:50:41.001211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.001377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.001403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.001592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.001796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.001839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.002032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.002237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.002263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.002450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.002613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.002639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.002842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.003068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.003111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.003297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.003437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.003464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.003656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.003964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.003991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 
00:27:01.894 [2024-07-10 15:50:41.004154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.004320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.004346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.004540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.004725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.004768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.894 [2024-07-10 15:50:41.004920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.005121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.894 [2024-07-10 15:50:41.005163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.894 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.005351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.005571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.005614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.005816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.005954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.005982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.006117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.006310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.006336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.006527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.006727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.006770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 
00:27:01.895 [2024-07-10 15:50:41.006927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.007106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.007132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.007325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.007455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.007492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.007657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.007827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.007854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.008019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.008178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.008205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.008368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.008563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.008590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.008776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.009020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.009047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.009231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.009388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.009413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 
00:27:01.895 [2024-07-10 15:50:41.009579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.009745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.009774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.010008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.010182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.010208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.010375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.010542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.010568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.010786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.011091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.011135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.011349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.011522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.011549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.011714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.011932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.011976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.012163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.012346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.012373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 
00:27:01.895 [2024-07-10 15:50:41.012587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.012782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.012825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.013074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.013288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.013315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.013533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.013809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.013856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.014047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.014268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.014293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.014468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.014626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.014667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.014860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.015086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.015130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.015337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.015564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.015608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 
00:27:01.895 [2024-07-10 15:50:41.015799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.016001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.016062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.016215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.016415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.016447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.016678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.016924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.016951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.017127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.017299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.017325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.017512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.017715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.017742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.895 qpair failed and we were unable to recover it. 00:27:01.895 [2024-07-10 15:50:41.017948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.018195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.895 [2024-07-10 15:50:41.018222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.018378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.018568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.018612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 
00:27:01.896 [2024-07-10 15:50:41.018770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.018974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.019022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.019153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.019339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.019365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.019520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.019789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.019832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.020051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.020232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.020259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.020423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.020643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.020688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.020846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.021075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.021119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.021256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.021443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.021487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 
00:27:01.896 [2024-07-10 15:50:41.021681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.021830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.021855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.022015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.022188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.022215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.022379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.022590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.022618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.022790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.022974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.023017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.023209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.023405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.023438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.023639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.023806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.023832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.024047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.024200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.024226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 
00:27:01.896 [2024-07-10 15:50:41.024363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.024580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.024624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.024801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.025142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.025194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.025352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.025568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.025612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.025847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.025979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.026006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.026163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.026330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.026357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.026553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.026768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.026795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.026995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.027161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.027187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 
00:27:01.896 [2024-07-10 15:50:41.027370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.027588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.027632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.027832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.027997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.028025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.028200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.028387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.028414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.028592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.028806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.028852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.029037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.029219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.029246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.029438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.029615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.029641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.029837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.029991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.030018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 
00:27:01.896 [2024-07-10 15:50:41.030195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.030376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.896 [2024-07-10 15:50:41.030404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.896 qpair failed and we were unable to recover it. 00:27:01.896 [2024-07-10 15:50:41.030611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.030811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.030855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.031050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.031294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.031321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.031449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.031615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.031641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.031803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.032025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.032051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.032215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.032405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.032436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.032610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.032777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.032806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 
00:27:01.897 [2024-07-10 15:50:41.033002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.033167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.033194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.033323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.033459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.033489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.033657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.033874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.033918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.034143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.034307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.034334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.034554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.034750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.034792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.034948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.035121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.035148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.035317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.035507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.035534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 
00:27:01.897 [2024-07-10 15:50:41.035698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.035860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.035886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.036047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.036235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.036262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.036454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.036689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.036737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.036945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.037110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.037136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.037272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.037463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.037492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.037656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.037914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.037969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.038158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.038329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.038354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 
00:27:01.897 [2024-07-10 15:50:41.038609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.038834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.038860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.039077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.039324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.039350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.039590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.039797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.039840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.040057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.040241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.040268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.040480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.040643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.040685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.897 qpair failed and we were unable to recover it. 00:27:01.897 [2024-07-10 15:50:41.040873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.897 [2024-07-10 15:50:41.041052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.041100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.041263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.041535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.041561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 
00:27:01.898 [2024-07-10 15:50:41.041724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.041966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.042019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.042215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.042380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.042406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.042563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.042804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.042857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.043083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.043321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.043346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.043522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.043709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.043752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.043972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.044223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.044265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.044472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.044660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.044704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 
00:27:01.898 [2024-07-10 15:50:41.044929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.045125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.045167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.045311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.045479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.045510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.045679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.045870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.045913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.046104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.046344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.046385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.046600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.046777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.046821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.046975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.047200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.047243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.047449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.047615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.047640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 
00:27:01.898 [2024-07-10 15:50:41.047861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.048097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.048139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.048334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.048503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.048530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.048706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.048903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.048928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.049144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.049348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.049373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.049548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.049707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.049754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.049939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.050210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.050275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.050476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.050683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.050727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 
00:27:01.898 [2024-07-10 15:50:41.050936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.051175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.051201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.051344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.051502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.051546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.051716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.051928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.051972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.052166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.052373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.052399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.052573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.052781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.052827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.052985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.053148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.053190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.053379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.053570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.053614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 
00:27:01.898 [2024-07-10 15:50:41.053807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.054015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.054041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.898 qpair failed and we were unable to recover it. 00:27:01.898 [2024-07-10 15:50:41.054199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.898 [2024-07-10 15:50:41.054385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.054411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.054607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.054771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.054796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.054978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.055194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.055221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.055386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.055579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.055624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.055811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.056014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.056043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.056259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.056506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.056533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 
00:27:01.899 [2024-07-10 15:50:41.056723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.056864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.056889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.057095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.057295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.057321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.057501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.057681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.057709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.057870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.058060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.058101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.058294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.058452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.058478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.058664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.058897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.058926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.059166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.059321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.059347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 
00:27:01.899 [2024-07-10 15:50:41.059517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.059667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.059711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.059922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.060128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.060153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.060312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.060474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.060500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.060634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.060768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.060793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.061117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.061339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.061365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.061526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.061834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.061884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.062071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.062276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.062301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 
00:27:01.899 [2024-07-10 15:50:41.062490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.062703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.062728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.063005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.063216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.063242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.063377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.063566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.063608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.063793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.063985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.064028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.064240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.064372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.064397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.064592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.064771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.064816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.064994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.065172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.065197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 
00:27:01.899 [2024-07-10 15:50:41.065326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.065489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.065516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.065671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.065935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.065978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.066192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.066370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.066396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.066573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.066761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.899 [2024-07-10 15:50:41.066803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.899 qpair failed and we were unable to recover it. 00:27:01.899 [2024-07-10 15:50:41.066992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.067310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.067361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.067562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.067765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.067809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.068161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.068374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.068400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 
00:27:01.900 [2024-07-10 15:50:41.068580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.068793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.068845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.069056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.069368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.069412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.069581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.069770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.069795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.069985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.070178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.070221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.070389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.070544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.070587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.070802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.070985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.071011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.071201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.071345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.071371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 
00:27:01.900 [2024-07-10 15:50:41.071551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.071758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.071783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.071962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.072285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.072336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.072527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.072671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.072698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.072938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.073096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.073121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.073283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.073450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.073477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.073665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.073880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.073906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.074078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.074240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.074266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 
00:27:01.900 [2024-07-10 15:50:41.074452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.074622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.074647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.074838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.075176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.075225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.075416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.075590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.075616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.075805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.076047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.076098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.076260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.076423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.076453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.076587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.076773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.076799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.076997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.077155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.077180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 
00:27:01.900 [2024-07-10 15:50:41.077307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.077476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.077504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.077692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.077879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.077923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.078198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.078350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.078374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.078554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.078709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.078750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.078976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.079140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.079165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.079327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.079497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.079524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 00:27:01.900 [2024-07-10 15:50:41.079753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.079910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.900 [2024-07-10 15:50:41.079952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.900 qpair failed and we were unable to recover it. 
00:27:01.900 [2024-07-10 15:50:41.080116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.080286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.080327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.080519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.080697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.080740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.080960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.081147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.081171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.081350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.081537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.081580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.081784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.081943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.081983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.082153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.082280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.082306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.082593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.082785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.082810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 
00:27:01.901 [2024-07-10 15:50:41.083014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.083203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.083229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.083400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.083577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.083622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.083857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.084070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.084113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.084261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.084459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.084490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.084690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.085009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.085051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.085262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.085481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.085507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.085735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.085936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.085980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 
00:27:01.901 [2024-07-10 15:50:41.086192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.086420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.086454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.086648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.086868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.086914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.087078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.087272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.087298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.087439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.087603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.087628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.087799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.087963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.087989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.088149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.088321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.088347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.088514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.088708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.088733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 
00:27:01.901 [2024-07-10 15:50:41.088907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.089117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.089161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.089324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.089511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.089556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.089756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.090003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.090056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.090217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.090472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.090502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.090732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.091018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.091069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.091238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.091402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.901 [2024-07-10 15:50:41.091436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.901 qpair failed and we were unable to recover it. 00:27:01.901 [2024-07-10 15:50:41.091606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.091793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.091820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 
00:27:01.902 [2024-07-10 15:50:41.092020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.092246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.092273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.092490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.092695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.092723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.092956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.093134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.093164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.093377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.093576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.093618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.093790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.094026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.094070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.094263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.094421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.094453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.094676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.094875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.094931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 
00:27:01.902 [2024-07-10 15:50:41.095203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.095384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.095411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.095651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.095919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.095947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.096109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.096315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.096340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.096556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.096761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.096812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.097025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.097226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.097254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.097418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.097647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.097673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.097865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.098029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.098072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 
00:27:01.902 [2024-07-10 15:50:41.098268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.098439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.098467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.098622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.098845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.098889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.099102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.099260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.099286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.099452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.099603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.099648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.099836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.100050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.100078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.100245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.100450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.100489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.100648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.100972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.101025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 
00:27:01.902 [2024-07-10 15:50:41.101213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.101379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.101407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.101622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.101821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.101863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.102125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.102313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.102339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.102534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.102668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.102698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.102876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.103074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.103118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.103309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.103490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.103519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.103746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.104067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.104118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 
00:27:01.902 [2024-07-10 15:50:41.104285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.104436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.104473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.902 qpair failed and we were unable to recover it. 00:27:01.902 [2024-07-10 15:50:41.104613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.902 [2024-07-10 15:50:41.104776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.104820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.105006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.105212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.105243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.105373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.105537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.105564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.105763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.106051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.106076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.106240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.106406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.106438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.106622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.106779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.106825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 
00:27:01.903 [2024-07-10 15:50:41.106990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.107290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.107341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.107544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.107687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.107713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.107897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.108134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.108177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.108440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.108756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.108806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.109063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.109285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.109328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.109504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.109775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.109840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.110038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.110203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.110230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 
00:27:01.903 [2024-07-10 15:50:41.110380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.110546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.110586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.110830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.111280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.111336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.111542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.111681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.111710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.111948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.112126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.112151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.112338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.112534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.112579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.112759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.112901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.112927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.113098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.113299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.113325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 
00:27:01.903 [2024-07-10 15:50:41.113478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.113688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.113717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.113964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.114143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.114172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.114369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.114693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.114747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.114925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.115105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.115132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.115302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.115568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.115620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.115802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.116136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.116189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.116353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.116581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.116625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 
00:27:01.903 [2024-07-10 15:50:41.116824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.117125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.117179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.117376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.117625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.117669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.117858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.118090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.118119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.118328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.118609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.903 [2024-07-10 15:50:41.118635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.903 qpair failed and we were unable to recover it. 00:27:01.903 [2024-07-10 15:50:41.118820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.119023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.119069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.119237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.119374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.119401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.119632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.120014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.120065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 
00:27:01.904 [2024-07-10 15:50:41.120193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.120326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.120350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.120562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.120791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.120834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.121076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.121247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.121273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.121449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.121676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.121706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.121994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.122188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.122232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.122378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.122598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.122641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.122893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.123118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.123161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 
00:27:01.904 [2024-07-10 15:50:41.123351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.123528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.123555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.123773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.124009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.124038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.124263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.124453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.124481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.124631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.124812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.124856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.125003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.125239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.125280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.125441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.125647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.125691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.125855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.126108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.126159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 
00:27:01.904 [2024-07-10 15:50:41.126335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.126585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.126629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.126824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.127089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.127117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.127299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.127483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.127513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.127687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.128028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.128078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.128272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.128435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.128461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.128647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.128849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.128876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.129056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.129240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.129266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 
00:27:01.904 [2024-07-10 15:50:41.129392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.129593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.129619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.129786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.129992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.130036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.130255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.130435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.130462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.130685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.130979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.131035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.131250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.131421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.131459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.131626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.131815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.131862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 00:27:01.904 [2024-07-10 15:50:41.132121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.132301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.904 [2024-07-10 15:50:41.132327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.904 qpair failed and we were unable to recover it. 
00:27:01.904 [2024-07-10 15:50:41.132550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.132786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.132831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.133045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.133438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.133507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.133675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.133913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.133962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.134172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.134323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.134348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.134511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.134686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.134729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.134910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.135191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.135244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.135440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.135768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.135818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 
00:27:01.905 [2024-07-10 15:50:41.136036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.136208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.136235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.136400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.136627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.136656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.136873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.137067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.137130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.137268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.137439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.137478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.137672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.137903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.137951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.138234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.138413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.138447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.138655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.138891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.138952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 
00:27:01.905 [2024-07-10 15:50:41.139141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.139349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.139376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.139565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.139767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.139811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.140165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.140375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.140401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.140582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.140898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.140948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.141130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.141309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.141336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.141556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.141759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.141802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.142025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.142174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.142201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 
00:27:01.905 [2024-07-10 15:50:41.142392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.142633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.142677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.142888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.143039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.143081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.143218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.143407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.143441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.143643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.143854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.143897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.144052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.144260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.144287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.144489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.144653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.144696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.144891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.145084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.145131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 
00:27:01.905 [2024-07-10 15:50:41.145296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.145501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.145545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.145738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.145946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.145990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.146159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.146343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.905 [2024-07-10 15:50:41.146370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.905 qpair failed and we were unable to recover it. 00:27:01.905 [2024-07-10 15:50:41.146562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.146767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.146810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.146966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.147230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.147283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.147501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.147757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.147819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.148012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.148188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.148215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 
00:27:01.906 [2024-07-10 15:50:41.148403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.148601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.148645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.148944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.149243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.149272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.149490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.149687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.149742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.149918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.150081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.150123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.150266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.150457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.150488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.150653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.150866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.150910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.151096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.151302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.151328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 
00:27:01.906 [2024-07-10 15:50:41.151525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.151707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.151749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.151932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.152136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.152180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.152378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.152538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.152582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.152793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.153066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.153113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.153271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.153406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.153439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.153632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.153938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.153991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.154258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.154418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.154478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 
00:27:01.906 [2024-07-10 15:50:41.154696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.154984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.155036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.155305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.155492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.155521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.155710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.155904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.155947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.156229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.156430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.156457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.156617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.156801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.156844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.157016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.157189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.157233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.157422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.157556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.157582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 
00:27:01.906 [2024-07-10 15:50:41.157857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.158154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.158207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.158394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.158544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.906 [2024-07-10 15:50:41.158570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.906 qpair failed and we were unable to recover it. 00:27:01.906 [2024-07-10 15:50:41.158756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.159028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.159074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.159237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.159402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.159437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.159628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.159956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.160013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.160222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.160397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.160432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.160610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.160777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.160827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 
00:27:01.907 [2024-07-10 15:50:41.161050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.161267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.161295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.161443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.161635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.161687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.161908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.162115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.162164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.162333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.162521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.162571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.162736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.162971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.163015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.163191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.163384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.163413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.163630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.163811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.163860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 
00:27:01.907 [2024-07-10 15:50:41.164087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.164272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.164301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.164533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.164727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.164774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.164968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.165172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.165220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.165415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.165653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.165699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.165930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.166139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.166187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.166383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.166573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.166619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.166830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.167041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.167090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 
00:27:01.907 [2024-07-10 15:50:41.167264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.167460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.167494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.167655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.167886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.167931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.168094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.168278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.168309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.168498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.168715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.168764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.168979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.169252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.169308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.169483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.169676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.169709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.169925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.170133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.170181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 
00:27:01.907 [2024-07-10 15:50:41.170383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.170556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.170606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.170802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.171004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.171050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.171247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.171419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.171467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.171687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.171927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.171973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.907 [2024-07-10 15:50:41.172319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.172544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.907 [2024-07-10 15:50:41.172576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.907 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.172733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.172924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.172969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.173125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.173290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.173329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 
00:27:01.908 [2024-07-10 15:50:41.173531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.173737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.173782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.173993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.174203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.174235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.174434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.174567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.174598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.174749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.174914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.174944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.175105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.175308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.175336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.175521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.175694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.175741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.175893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.176092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.176141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 
00:27:01.908 [2024-07-10 15:50:41.176314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.176450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.176478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.176658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.176835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.176880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.177149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.177331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.177368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.177536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.177749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.177801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.177994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.178149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.178179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.178351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.178539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.178586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.178746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.178910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.178942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 
00:27:01.908 [2024-07-10 15:50:41.179118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.179311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.179343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.179535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.179743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.179793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.179968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.180164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.180197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.180369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.180582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.180616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.180819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.181030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.181078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.181267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.181463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.181497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.181680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.181892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.181937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 
00:27:01.908 [2024-07-10 15:50:41.182130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.182283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.182312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.182480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.182690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.182722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.182869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.183089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.183139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.183338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.183531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.183578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.183770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.183979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.184024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.184196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.184367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.184397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 00:27:01.908 [2024-07-10 15:50:41.184634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.184886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.184935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.908 qpair failed and we were unable to recover it. 
00:27:01.908 [2024-07-10 15:50:41.185106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-10 15:50:41.185291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.185325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.185541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.185759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.185816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.185981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.186172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.186205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.186378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.186568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.186598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.186745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.186965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.187011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.187158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.187330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.187358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.187550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.187791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.187840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 
00:27:01.909 [2024-07-10 15:50:41.188039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.188253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.188284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.188502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.188692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.188746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.188933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.189145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.189176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.189349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.189512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.189567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.189774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.189932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.189969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.190161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.190317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.190350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.190575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.190758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.190791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 
00:27:01.909 [2024-07-10 15:50:41.190953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.191105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.191138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.191324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.191522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.191549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.191739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.191912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.191941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.192093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.192246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.192276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.192468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.192602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.192629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.192796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.192982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.193011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.193219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.193376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.193405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 
00:27:01.909 [2024-07-10 15:50:41.193596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.193762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.193789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.193935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.194072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.194116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.194288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.194492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.194519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.194653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.194821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.194851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.195037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.195213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.195242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.195420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.195580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.195607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.195774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.195962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.195988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 
00:27:01.909 [2024-07-10 15:50:41.196200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.196360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.196388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.196560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.196726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.196753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.196916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.197079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.197106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.909 qpair failed and we were unable to recover it. 00:27:01.909 [2024-07-10 15:50:41.197252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-10 15:50:41.197484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.197514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.197656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.197790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.197817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.198009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.198180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.198207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.198379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.198550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.198577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 
00:27:01.910 [2024-07-10 15:50:41.198764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.198959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.198985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.199196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.199343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.199373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.199565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.199731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.199760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.199892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.200081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.200110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.200319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.200507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.200534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.200694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.200882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.200909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.201102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.201234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.201260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 
00:27:01.910 [2024-07-10 15:50:41.201411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.201597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.201624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.201811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.202000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.202026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.202189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.202344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.202373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.202562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.202787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.202856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.203021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.203234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.203281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.203479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.203675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.203724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.203981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.204264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.204318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 
00:27:01.910 [2024-07-10 15:50:41.204540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.204740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.204791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.205009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.205196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.205224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.205390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.205562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.205611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.205786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.206066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.206118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.206291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.206509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.206563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.206764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.206970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.207019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.207169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.207358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.207393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 
00:27:01.910 [2024-07-10 15:50:41.207637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.207819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.207868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.910 qpair failed and we were unable to recover it. 00:27:01.910 [2024-07-10 15:50:41.208069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.208236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-10 15:50:41.208271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.208439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.208629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.208679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.208863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.209073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.209120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.209258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.209443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.209472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.209663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.209876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.209923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.210120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.210311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.210345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 
00:27:01.911 [2024-07-10 15:50:41.210545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.210754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.210801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.211031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.211190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.211222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.211402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.211605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.211654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.211815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.211995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.212041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.212244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.212413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.212451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.212604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.212802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.212853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.213150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.213360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.213391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 
00:27:01.911 [2024-07-10 15:50:41.213625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.213829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.213880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.214044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.214273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.214332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.214520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.214758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.214791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.214979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.215185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.215215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.215416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.215577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.215604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.215790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.215978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.216004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.216162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.216350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.216376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 
00:27:01.911 [2024-07-10 15:50:41.216549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.216698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.216726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.216888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.217080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.217123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.217331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.217502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.217529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.217685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.217850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.217879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.218014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.218205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.218231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.218401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.218551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.218577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.218744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.218883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.218927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 
00:27:01.911 [2024-07-10 15:50:41.219111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.219293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.219322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.219521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.219696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.219723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.219856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.220016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.220045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.220261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.220463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.220506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.911 qpair failed and we were unable to recover it. 00:27:01.911 [2024-07-10 15:50:41.220703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.220865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.911 [2024-07-10 15:50:41.220891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.221055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.221253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.221282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.221500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.221654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.221680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 
00:27:01.912 [2024-07-10 15:50:41.221847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.222011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.222041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.222241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.222387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.222416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.222638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.222784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.222811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.222970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.223111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.223138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.223325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.223462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.223489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.223654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.223791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.223818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.224002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.224188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.224215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 
00:27:01.912 [2024-07-10 15:50:41.224379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.224540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.224569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.224734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.224904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.224931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.225172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.225368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.225394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.225570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.225726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.225752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.225923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.226079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.226108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.226270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.226431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.226476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.226639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.226802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.226828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 
00:27:01.912 [2024-07-10 15:50:41.227002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.227143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.227169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.227311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.227445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.227473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.227663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.227801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.227830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.228021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.228186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.228213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.228401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.228540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.228569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.228734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.228877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.228904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.229101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.229254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.229284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 
00:27:01.912 [2024-07-10 15:50:41.229430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.229597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.229624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.229798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.229996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.230022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.230164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.230326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.230352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.230514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.230678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.230705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.230840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.230974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.231000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.231163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.231297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.231325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 00:27:01.912 [2024-07-10 15:50:41.231486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.231630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.912 [2024-07-10 15:50:41.231658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.912 qpair failed and we were unable to recover it. 
00:27:01.912 [2024-07-10 15:50:41.231823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.231990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.232016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.232181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.232373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.232401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.232580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.232770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.232796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.232930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.233093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.233120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.233271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.233407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.233440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.233609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.233748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.233775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.233940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.234108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.234134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 
00:27:01.913 [2024-07-10 15:50:41.234300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.234494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.234522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.234687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.234819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.234845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.235008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.235149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.235176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.235320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.235487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.235516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.235707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.235841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.235872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.236011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.236175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.236201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.236371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.236515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.236547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 
00:27:01.913 [2024-07-10 15:50:41.236715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.236904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.236931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.237093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.237252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.237280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.237448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.237609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.237635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.237800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.237994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.238021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.238181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.238371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.238400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.238610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.238743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.238770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.238936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.239126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.239153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 
00:27:01.913 [2024-07-10 15:50:41.239294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.239492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.239519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.239661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.239793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.239819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.239980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.240117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.240148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.240345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.240510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.240537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.240727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.240885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.240912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.241051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.241213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.241240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.241406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.241576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.241604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 
00:27:01.913 [2024-07-10 15:50:41.241767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.241904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.241935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.242109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.242270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.242299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.242446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.242634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.913 [2024-07-10 15:50:41.242661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.913 qpair failed and we were unable to recover it. 00:27:01.913 [2024-07-10 15:50:41.242800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.243019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.243046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.243247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.243386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.243413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.243609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.243798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.243830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.243993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.244150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.244178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 
00:27:01.914 [2024-07-10 15:50:41.244363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.244559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.244587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.244777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.244920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.244947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.245122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.245262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.245290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.245455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.245595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.245624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.245785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.245945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.245972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.246162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.246323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.246350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.246511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.246672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.246699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 
00:27:01.914 [2024-07-10 15:50:41.246823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.247005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.247035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.247225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.247386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.247417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.247587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.247776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.247803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.247963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.248140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.248166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.248334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.248506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.248535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.248702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.248894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.248921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.249082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.249273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.249300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 
00:27:01.914 [2024-07-10 15:50:41.249469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.249635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.249662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.249800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.249987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.250013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.250254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.250434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.250461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.250626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.250754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.250781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.250929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.251119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.251146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.251315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.251504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.251532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.251700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.251836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.251863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 
00:27:01.914 [2024-07-10 15:50:41.252053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.252225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.252253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.914 [2024-07-10 15:50:41.252446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.252582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.914 [2024-07-10 15:50:41.252608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.914 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.252776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.252941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.252968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.253128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.253265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.253292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.253458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.253626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.253653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.253813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.254012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.254039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.254207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.254370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.254397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 
00:27:01.915 [2024-07-10 15:50:41.254557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.254702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.254728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.254891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.255018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.255044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.255210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.255402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.255434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.255570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.255725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.255756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.255938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.256126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.256151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.256290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.256419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.256454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.256640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.256826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.256854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 
00:27:01.915 [2024-07-10 15:50:41.257040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.257173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.257198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.257378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.257573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.257601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.257811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.257991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.258052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.258220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.258379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.258405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.258614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.258818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.258848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.259054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.259199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.259228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.259410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.259597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.259625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 
00:27:01.915 [2024-07-10 15:50:41.259790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.259955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.259999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.260249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.260446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.260473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.260635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.260871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.260924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.261100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.261275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.261303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.261471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.261631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.261658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.261811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.262024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.262050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 00:27:01.915 [2024-07-10 15:50:41.262256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.262443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.915 [2024-07-10 15:50:41.262473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:01.915 qpair failed and we were unable to recover it. 
00:27:01.915 [2024-07-10 15:50:41.262659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.262839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.262869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.188 qpair failed and we were unable to recover it. 00:27:02.188 [2024-07-10 15:50:41.263055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.263186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.263213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.188 qpair failed and we were unable to recover it. 00:27:02.188 [2024-07-10 15:50:41.263381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.263566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.263595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.188 qpair failed and we were unable to recover it. 00:27:02.188 [2024-07-10 15:50:41.263798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.263942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.263973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.188 qpair failed and we were unable to recover it. 00:27:02.188 [2024-07-10 15:50:41.264125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.264302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.264330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.188 qpair failed and we were unable to recover it. 00:27:02.188 [2024-07-10 15:50:41.264490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.264650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.264677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.188 qpair failed and we were unable to recover it. 00:27:02.188 [2024-07-10 15:50:41.264813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.264998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.188 [2024-07-10 15:50:41.265024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 
00:27:02.189 [2024-07-10 15:50:41.265205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.265389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.265418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.265588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.265761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.265805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.266014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.266166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.266195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.266414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.266593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.266619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.266776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.266955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.266984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.267139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.267346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.267376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.267610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.267794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.267823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 
00:27:02.189 [2024-07-10 15:50:41.268003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.268209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.268237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.268441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.268612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.268639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.268827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.268996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.269021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.269208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.269388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.269417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.269586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.269798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.269827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.269997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.270147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.270176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.270391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.270586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.270612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 
00:27:02.189 [2024-07-10 15:50:41.270799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.270962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.270988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.271174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.271355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.271384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.271565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.271769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.271797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.271974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.272166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.272195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.272384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.272577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.272605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.272828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.273087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.273114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 00:27:02.189 [2024-07-10 15:50:41.273295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.273514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.273542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.189 qpair failed and we were unable to recover it. 
00:27:02.189 [2024-07-10 15:50:41.273727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.189 [2024-07-10 15:50:41.273870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.273900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.274084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.274215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.274242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.274383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.274540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.274567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.274745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.274886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.274916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.275124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.275331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.275360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.275568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.275753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.275782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.275960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.276135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.276163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 
00:27:02.190 [2024-07-10 15:50:41.276339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.276541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.276571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.276713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.276895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.276924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.277069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.277226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.277252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.277423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.277604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.277631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.277814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.277995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.278024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.278238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.278394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.278444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.278654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.278802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.278830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 
00:27:02.190 [2024-07-10 15:50:41.279034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.279331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.279382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.279560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.279706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.279735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.279937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.280093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.280119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.280307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.280486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.280517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.280769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.281021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.281051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.281260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.281445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.281473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.281644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.281846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.281876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 
00:27:02.190 [2024-07-10 15:50:41.282084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.282302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.282365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.282581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.282723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.282750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.282880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.283066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.283107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.283291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.283471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.283500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.283681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.283820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.283845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.283989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.284168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.284197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.190 [2024-07-10 15:50:41.284344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.284528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.284558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 
00:27:02.190 [2024-07-10 15:50:41.284742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.284899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.190 [2024-07-10 15:50:41.284925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.190 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.285086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.285218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.285263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.285452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.285585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.285610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.285805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.285978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.286009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.286189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.286376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.286403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.286574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.286750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.286779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.286987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.287165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.287193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 
00:27:02.191 [2024-07-10 15:50:41.287347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.287516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.287546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.287733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.287937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.287965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.288172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.288335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.288361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.288524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.288719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.288748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.288957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.289120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.289146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.289300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.289436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.289462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.289656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.289839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.289869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 
00:27:02.191 [2024-07-10 15:50:41.290018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.290193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.290223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.290432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.290638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.290664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.290854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.291011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.291037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.291225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.291365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.291394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.291567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.291697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.291725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.291870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.292057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.292101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.292266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.292433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.292459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 
00:27:02.191 [2024-07-10 15:50:41.292652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.292840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.292867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.293052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.293265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.293292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.293478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.293655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.293686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.293849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.294045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.294072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.191 [2024-07-10 15:50:41.294238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.294432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.191 [2024-07-10 15:50:41.294462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.191 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.294671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.294835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.294876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.295054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.295329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.295384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 
00:27:02.192 [2024-07-10 15:50:41.295583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.295789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.295818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.296003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.296178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.296207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.296388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.296611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.296637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.296784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.296982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.297012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.297196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.297355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.297380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.297575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.297760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.297810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.298140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.298343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.298378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 
00:27:02.192 [2024-07-10 15:50:41.298546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.298720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.298748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.298917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.299069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.299098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.299302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.299513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.299543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.299721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.300009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.300035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.300214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.300413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.300449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.300602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.300813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.300840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.301005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.301192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.301219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 
00:27:02.192 [2024-07-10 15:50:41.301422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.301617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.301644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.301854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.302082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.302109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.302245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.302397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.302439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.302652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.302836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.302864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.303071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.303301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.192 [2024-07-10 15:50:41.303354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.192 qpair failed and we were unable to recover it. 00:27:02.192 [2024-07-10 15:50:41.303539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.303727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.303754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.303919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.304103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.304132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 
00:27:02.193 [2024-07-10 15:50:41.304291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.304454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.304500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.304776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.304966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.304992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.305178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.305364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.305390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.305562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.305723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.305767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.305982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.306278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.306307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.306509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.306662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.306694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.306836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.307009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.307039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 
00:27:02.193 [2024-07-10 15:50:41.307207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.307395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.307423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.307619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.307754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.307798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.308093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.308338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.308365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.308540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.308680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.308706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.308886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.309071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.309097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.309263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.309471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.309501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.309724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.309908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.309934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 
00:27:02.193 [2024-07-10 15:50:41.310096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.310254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.310281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.310471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.310663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.310693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.310852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.311030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.311059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.311219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.311442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.311487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.193 [2024-07-10 15:50:41.311660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.311826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.193 [2024-07-10 15:50:41.311852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.193 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.312013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.312174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.312201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.312334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.312486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.312513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 
00:27:02.194 [2024-07-10 15:50:41.312691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.312881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.312908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.313117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.313297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.313322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.313531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.313672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.313701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.313885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.314076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.314102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.314292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.314523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.314574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.314766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.314953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.314996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.315171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.315358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.315385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 
00:27:02.194 [2024-07-10 15:50:41.315593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.315743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.315774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.315951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.316161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.316190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.316338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.316530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.316558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.316692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.316852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.316879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.317068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.317279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.317308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.317520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.317701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.317729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.194 [2024-07-10 15:50:41.317899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.318050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.318079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 
00:27:02.194 [2024-07-10 15:50:41.318229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.318439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.194 [2024-07-10 15:50:41.318467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.194 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.318612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.318769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.318813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.319071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.319265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.319294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.319492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.319630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.319657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.319835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.320037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.320066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.320279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.320440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.320481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.320672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.320887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.320928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 
00:27:02.195 [2024-07-10 15:50:41.321123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.321294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.321323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.321499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.321705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.321733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.321919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.322170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.322199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.322381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.322564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.322593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.322769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.322972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.323000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.323179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.323359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.323389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 00:27:02.195 [2024-07-10 15:50:41.323574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.323747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.195 [2024-07-10 15:50:41.323776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.195 qpair failed and we were unable to recover it. 
00:27:02.195 [2024-07-10 15:50:41.323934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.195 [2024-07-10 15:50:41.324144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.195 [2024-07-10 15:50:41.324170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:02.195 qpair failed and we were unable to recover it.
[... the same posix_sock_create "connect() failed, errno = 111" and nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420" error pair repeats for every reconnect attempt between 15:50:41.324 and 15:50:41.386, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:02.202 [2024-07-10 15:50:41.386313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.202 [2024-07-10 15:50:41.386475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.202 [2024-07-10 15:50:41.386501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:02.202 qpair failed and we were unable to recover it.
00:27:02.202 [2024-07-10 15:50:41.386689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.386963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.387014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.387178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.387344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.387370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.387606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.387783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.387812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.387999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.388205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.388234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.388412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.388608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.388637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.388814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.389018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.389047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.389234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.389415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.389453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 
00:27:02.202 [2024-07-10 15:50:41.389639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.389801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.389828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.390012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.390277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.390327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.390500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.390687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.390713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.391073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.391251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.391280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.391511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.391691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.391720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.391906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.392135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.392161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.392381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.392559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.392589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 
00:27:02.202 [2024-07-10 15:50:41.392783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.392963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.392991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.393169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.393343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.393372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.393530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.393682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.393711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.393861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.393997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.394023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.394151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.394319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.394344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.394525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.394653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.394679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 00:27:02.202 [2024-07-10 15:50:41.394974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.395175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.395204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.202 qpair failed and we were unable to recover it. 
00:27:02.202 [2024-07-10 15:50:41.395356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.202 [2024-07-10 15:50:41.395529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.395559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.395741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.396147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.396205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.396414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.396573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.396602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.396788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.396939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.396967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.397147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.397351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.397380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.397568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.397777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.397802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.397999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.398196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.398226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 
00:27:02.203 [2024-07-10 15:50:41.398392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.398610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.398637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.398826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.399005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.399034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.399217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.399399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.399446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.399621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.399827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.399856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.400060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.400208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.400237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.400419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.400603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.400632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.400801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.400941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.400968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 
00:27:02.203 [2024-07-10 15:50:41.401107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.401272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.401314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.401491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.401669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.401698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.401894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.402110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.402139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.402339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.402527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.402555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.402685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.402897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.402926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.403116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.403272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.403300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.403502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.403670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.403699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 
00:27:02.203 [2024-07-10 15:50:41.403875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.404072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.404122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.404303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.404530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.404581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.404764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.404986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.405040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.203 qpair failed and we were unable to recover it. 00:27:02.203 [2024-07-10 15:50:41.405217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.203 [2024-07-10 15:50:41.405435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.405462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.405649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.405914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.405978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.406308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.406535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.406566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.406765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.407034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.407089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 
00:27:02.204 [2024-07-10 15:50:41.407243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.407395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.407441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.407643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.407904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.407956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.408258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.408498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.408524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.408722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.408921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.408949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.409140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.409261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.409285] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.409462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.409597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.409623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.409786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.409924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.409950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 
00:27:02.204 [2024-07-10 15:50:41.410097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.410223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.410249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.410418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.410613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.410640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.410781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.410940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.410967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.411133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.411266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.411292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.411455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.411620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.411647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.411782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.411945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.411974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.412137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.412323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.412349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 
00:27:02.204 [2024-07-10 15:50:41.412542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.412707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.412735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.412923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.413069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.413095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.413223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.413412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.413446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.413590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.413744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.413772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.413916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.414077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.414104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.204 qpair failed and we were unable to recover it. 00:27:02.204 [2024-07-10 15:50:41.414269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.204 [2024-07-10 15:50:41.414405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.414437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.414628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.414768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.414799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 
00:27:02.205 [2024-07-10 15:50:41.414944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.415164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.415191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.415356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.415497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.415524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.415688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.415824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.415852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.415985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.416170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.416196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.416355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.416522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.416549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.416713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.416852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.416879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.417040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.417204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.417230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 
00:27:02.205 [2024-07-10 15:50:41.417391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.417531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.417558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.417723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.417911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.417939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.418098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.418243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.418268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.418409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.418606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.418633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.418795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.418936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.418963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.419120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.419283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.419308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.419478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.419641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.419668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 
00:27:02.205 [2024-07-10 15:50:41.419831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.420030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.420056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.420201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.420348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.420374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.205 qpair failed and we were unable to recover it. 00:27:02.205 [2024-07-10 15:50:41.420569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.205 [2024-07-10 15:50:41.420726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.420752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.420943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.421086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.421113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.421250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.421405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.421448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.421615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.421751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.421778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.421967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.422135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.422161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 
00:27:02.206 [2024-07-10 15:50:41.422325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.422459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.422486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.422649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.422825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.422852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.423011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.423176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.423201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.423396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.423569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.423601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.423762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.423895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.423921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.424105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.424245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.424271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.424438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.424603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.424629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 
00:27:02.206 [2024-07-10 15:50:41.424792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.424953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.424981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.425154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.425292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.425318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.425509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.425643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.425670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.425807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.425945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.425974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.426167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.426353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.426380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.426550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.426692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.426720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.426882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.427071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.427101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 
00:27:02.206 [2024-07-10 15:50:41.427268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.427439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.427470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.427635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.427768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.427794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.427935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.428092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.428119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.428280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.428449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.428477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.428614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.428773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.428800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.428965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.429152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.429181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.429352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.429510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.429537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 
00:27:02.206 [2024-07-10 15:50:41.429707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.429893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.429922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.430088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.430249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.430276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.430461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.430622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.430653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.206 [2024-07-10 15:50:41.430838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.431007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.206 [2024-07-10 15:50:41.431034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.206 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.431163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.431325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.431352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.431524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.431657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.431684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.431813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.431971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.431997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 
00:27:02.207 [2024-07-10 15:50:41.432154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.432296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.432326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.432513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.432673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.432700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.432862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.433019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.433045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.433236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.433400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.433433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.433574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.433733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.433761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.433927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.434065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.434096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.434258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.434395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.434421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 
00:27:02.207 [2024-07-10 15:50:41.434618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.434764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.434795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.434936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.435071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.435099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.435293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.435437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.435464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.435627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.435767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.435795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.435939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.436104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.436130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.436320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.436489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.436516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.436679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.436846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.436873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 
00:27:02.207 [2024-07-10 15:50:41.437054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.437257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.437287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.437484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.437676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.437702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.437879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.438062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.438089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.438253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.438397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.438437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.438635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.438766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.438794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.438927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.439089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.439115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.439275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.439436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.439462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 
00:27:02.207 [2024-07-10 15:50:41.439628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.439785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.439812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.439948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.440110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.440136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.440329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.440475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.440501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.440656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.440816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.440844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.440983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.441121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.441148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.441314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.441473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.441500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.207 qpair failed and we were unable to recover it. 00:27:02.207 [2024-07-10 15:50:41.441686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.207 [2024-07-10 15:50:41.441850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.441876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 
00:27:02.208 [2024-07-10 15:50:41.442010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.442173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.442199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.442331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.442509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.442536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.442696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.442826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.442853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.443011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.443171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.443197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.443359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.443528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.443555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.443692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.443850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.443877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.444039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.444198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.444224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 
00:27:02.208 [2024-07-10 15:50:41.444407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.444580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.444607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.444779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.444920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.444947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.445129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.445282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.445308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.445478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.445650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.445677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.445840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.445994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.446020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.446178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.446345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.446371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.446524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.446676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.446702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 
00:27:02.208 [2024-07-10 15:50:41.446889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.447052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.447078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.447267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.447407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.447439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.447582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.447718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.447744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.447875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.448003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.448028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.448156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.448321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.448347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.448483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.448648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.448674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.448836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.449001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.449027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 
00:27:02.208 [2024-07-10 15:50:41.449161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.449302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.449328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.449483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.449621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.449646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.449832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.449993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.450019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.450204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.450338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.450365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.450536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.450695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.450720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.450904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.451067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.451093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.451257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.451399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.451432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 
00:27:02.208 [2024-07-10 15:50:41.451597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.451761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.451790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.451929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.452114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.452141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.208 qpair failed and we were unable to recover it. 00:27:02.208 [2024-07-10 15:50:41.452306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.208 [2024-07-10 15:50:41.452436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.452462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.452596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.452757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.452782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.452920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.453061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.453087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.453269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.453437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.453464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.453603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.453770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.453796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 
00:27:02.209 [2024-07-10 15:50:41.453956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.454139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.454165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.454324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.454485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.454513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.454673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.454853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.454879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.455047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.455203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.455234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.455431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.455621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.455646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.455788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.455925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.455951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.456104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.456269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.456296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 
00:27:02.209 [2024-07-10 15:50:41.456485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.456648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.456674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.456868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.457026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.457052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.457196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.457333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.457360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.457545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.457686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.457711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.457876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.458036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.458062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.458226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.458418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.458452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.458594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.458731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.458757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 
00:27:02.209 [2024-07-10 15:50:41.458923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.459082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.459108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.459246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.459418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.459451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.459618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.459758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.459784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.459959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.460118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.460145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.460309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.460443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.460471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.460663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.460795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.460821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 00:27:02.209 [2024-07-10 15:50:41.460990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.461119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.461146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.209 qpair failed and we were unable to recover it. 
00:27:02.209 [2024-07-10 15:50:41.461313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.461454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.209 [2024-07-10 15:50:41.461481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.461642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.461778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.461804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.461984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.462148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.462173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.462344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.462486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.462513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.462711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.462848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.462874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.463037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.463199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.463226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.463391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.463564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.463591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 
00:27:02.210 [2024-07-10 15:50:41.463783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.463922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.463948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.464085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.464226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.464252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.464387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.464577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.464603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.464741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.464879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.464905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.465047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.465209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.465236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.465382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.465530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.465557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.465726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.465899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.465925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 
00:27:02.210 [2024-07-10 15:50:41.466088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.466251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.466277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.466483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.466648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.466674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.466837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.466998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.467024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.467186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.467319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.467345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.467535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.467696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.467721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.467885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.468042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.468069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.468206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.468390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.468416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 
00:27:02.210 [2024-07-10 15:50:41.468590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.468742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.468768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.468925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.469088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.469114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.469278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.469444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.469475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.469638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.469779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.469806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.469968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.470164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.470190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.470378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.470571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.470598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.470737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.470898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.470925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 
00:27:02.210 [2024-07-10 15:50:41.471066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.471251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.471276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.471410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.471550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.471576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.471729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.471938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.471990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.210 [2024-07-10 15:50:41.472201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.472362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.210 [2024-07-10 15:50:41.472404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.210 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.472619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.472778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.472806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.472963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.473165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.473198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.473351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.473553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.473581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 
00:27:02.211 [2024-07-10 15:50:41.473738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.473946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.473975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.474287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.474503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.474530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.474687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.474990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.475040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.475243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.475462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.475488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.475647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.475831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.475857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.476012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.476172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.476215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.476435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.476566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.476593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 
00:27:02.211 [2024-07-10 15:50:41.476794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.476986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.477027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.477214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.477400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.477437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.477603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.477766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.477792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.477956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.478167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.478193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.478358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.478524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.478552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.478721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.478845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.478886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.479130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.479303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.479332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 
00:27:02.211 [2024-07-10 15:50:41.479508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.479688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.479717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.479880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.480074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.480118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.480274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.480444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.480470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.480634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.480799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.480886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.481065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.481264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.481293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.481481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.481640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.481666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.481829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.481992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.482017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 
00:27:02.211 [2024-07-10 15:50:41.482160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.482356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.482385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.482582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.482719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.482760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.482944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.483089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.483118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.483329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.483500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.483528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.211 qpair failed and we were unable to recover it. 00:27:02.211 [2024-07-10 15:50:41.483704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.211 [2024-07-10 15:50:41.483908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.483937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.484134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.484298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.484323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.484515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.484754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.484806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 
00:27:02.212 [2024-07-10 15:50:41.484988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.485220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.485268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.485454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.485598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.485625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.485835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.486082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.486133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.486333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.486483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.486512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.486694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.486833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.486859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.487020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.487297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.487326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.487530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.487678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.487707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 
00:27:02.212 [2024-07-10 15:50:41.487909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.488126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.488152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.488349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.488504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.488530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.488692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.488909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.488938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.489138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.489337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.489365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.489537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.489765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.489821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.490051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.490201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.490229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 00:27:02.212 [2024-07-10 15:50:41.490411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.490570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.212 [2024-07-10 15:50:41.490599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.212 qpair failed and we were unable to recover it. 
00:27:02.212 [2024-07-10 15:50:41.490779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.491010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.491066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.491255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.491421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.491456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.491624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.491894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.491946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.492152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.492287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.492313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.492527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.492715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.492741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.492917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.493086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.493156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.493364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.493521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.493551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 
00:27:02.213 [2024-07-10 15:50:41.493726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.493910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.493942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.494143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.494329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.494356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.213 qpair failed and we were unable to recover it. 00:27:02.213 [2024-07-10 15:50:41.494511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.213 [2024-07-10 15:50:41.494668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.494710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.214 qpair failed and we were unable to recover it. 00:27:02.214 [2024-07-10 15:50:41.494890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.495055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.495082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.214 qpair failed and we were unable to recover it. 00:27:02.214 [2024-07-10 15:50:41.495218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.495373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.495399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.214 qpair failed and we were unable to recover it. 00:27:02.214 [2024-07-10 15:50:41.495587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.495727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.495756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.214 qpair failed and we were unable to recover it. 00:27:02.214 [2024-07-10 15:50:41.495939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.496097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.496123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.214 qpair failed and we were unable to recover it. 
00:27:02.214 [2024-07-10 15:50:41.496260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.496432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.496459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.214 qpair failed and we were unable to recover it. 00:27:02.214 [2024-07-10 15:50:41.496684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.214 [2024-07-10 15:50:41.496846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.496889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.497092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.497312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.497362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.497542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.497724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.497751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.497944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.498201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.498230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.498451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.498613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.498640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.498802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.498980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.499009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 
00:27:02.215 [2024-07-10 15:50:41.499185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.499355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.499384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.499540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.499696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.499741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.499988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.500168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.500194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.500401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.500594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.500621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.500810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.500936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.500962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.215 qpair failed and we were unable to recover it. 00:27:02.215 [2024-07-10 15:50:41.501149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.215 [2024-07-10 15:50:41.501275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.501302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.501469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.501614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.501640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 
00:27:02.216 [2024-07-10 15:50:41.501842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.502035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.502060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.502265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.502421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.502457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.502654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.502787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.502812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.503023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.503249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.503277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.503454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.503645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.503687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.503857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.504032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.504060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.504244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.504453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.504482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 
00:27:02.216 [2024-07-10 15:50:41.504657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.504966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.505024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.505203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.505395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.505435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.505643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.505824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.505854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.506072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.506262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.506287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.506499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.506642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.506672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.506844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.507281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.507312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.507498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.507666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.507702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 
00:27:02.216 [2024-07-10 15:50:41.507832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.508034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.508062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.216 qpair failed and we were unable to recover it. 00:27:02.216 [2024-07-10 15:50:41.508248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.216 [2024-07-10 15:50:41.508445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.508479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.508660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.508848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.508876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.509054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.509267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.509296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.509447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.509592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.509616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.509785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.509964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.510003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.510224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.510377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.510407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 
00:27:02.217 [2024-07-10 15:50:41.510641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.510777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.510803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.510966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.511126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.511152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.511293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.511462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.511489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.511654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.511823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.511849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.511982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.512165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.512208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.217 [2024-07-10 15:50:41.512356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.512523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.217 [2024-07-10 15:50:41.512550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.217 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.512727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.512979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.513005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 
00:27:02.218 [2024-07-10 15:50:41.513218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.513401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.513432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.513577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.513733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.513762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.513934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.514087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.514117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.514285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.514455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.514507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.514686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.514905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.514930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.515113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.515320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.515349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.515540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.515680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.515709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 
00:27:02.218 [2024-07-10 15:50:41.515911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.516090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.516119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.516296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.516447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.218 [2024-07-10 15:50:41.516496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.218 qpair failed and we were unable to recover it. 00:27:02.218 [2024-07-10 15:50:41.516674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.516909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.516956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 00:27:02.219 [2024-07-10 15:50:41.517132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.517311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.517339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 00:27:02.219 [2024-07-10 15:50:41.517486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.517639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.517668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 00:27:02.219 [2024-07-10 15:50:41.517844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.517997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.518024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 00:27:02.219 [2024-07-10 15:50:41.518207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.518415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.518459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 
00:27:02.219 [2024-07-10 15:50:41.518628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.518757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.518793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 00:27:02.219 [2024-07-10 15:50:41.519007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.519280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.519332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 00:27:02.219 [2024-07-10 15:50:41.519556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.519696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.519721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.219 qpair failed and we were unable to recover it. 00:27:02.219 [2024-07-10 15:50:41.519921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.520226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.219 [2024-07-10 15:50:41.520276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.520484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.520637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.520666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.520848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.521047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.521102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.521305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.521511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.521540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 
00:27:02.220 [2024-07-10 15:50:41.521697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.521887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.521934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.522134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.522293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.522318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.522504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.522681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.522713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.522924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.523128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.523156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.523354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.523504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.523533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.523694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.523836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.523879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 00:27:02.220 [2024-07-10 15:50:41.524202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.524402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.524437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.220 qpair failed and we were unable to recover it. 
00:27:02.220 [2024-07-10 15:50:41.524595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.220 [2024-07-10 15:50:41.524805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.524833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 00:27:02.221 [2024-07-10 15:50:41.525000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.525194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.525248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 00:27:02.221 [2024-07-10 15:50:41.525438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.525570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.525614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 00:27:02.221 [2024-07-10 15:50:41.525793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.525961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.525991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 00:27:02.221 [2024-07-10 15:50:41.526191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.526360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.526389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 00:27:02.221 [2024-07-10 15:50:41.526621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.526783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.526811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 00:27:02.221 [2024-07-10 15:50:41.526995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.527160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.527186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 
00:27:02.221 [2024-07-10 15:50:41.527379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.527590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.527618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.221 qpair failed and we were unable to recover it. 00:27:02.221 [2024-07-10 15:50:41.527767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.527969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.221 [2024-07-10 15:50:41.527998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.222 qpair failed and we were unable to recover it. 00:27:02.222 [2024-07-10 15:50:41.528188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.528349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.528374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.222 qpair failed and we were unable to recover it. 00:27:02.222 [2024-07-10 15:50:41.528591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.528757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.528786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.222 qpair failed and we were unable to recover it. 00:27:02.222 [2024-07-10 15:50:41.528981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.529249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.529296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.222 qpair failed and we were unable to recover it. 00:27:02.222 [2024-07-10 15:50:41.529481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.529665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.529704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.222 qpair failed and we were unable to recover it. 00:27:02.222 [2024-07-10 15:50:41.529895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.530080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.222 [2024-07-10 15:50:41.530105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.222 qpair failed and we were unable to recover it. 
00:27:02.222 [2024-07-10 15:50:41.530302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.530463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.530513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.223 qpair failed and we were unable to recover it. 00:27:02.223 [2024-07-10 15:50:41.530783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.530992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.531018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.223 qpair failed and we were unable to recover it. 00:27:02.223 [2024-07-10 15:50:41.531153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.531339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.531380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.223 qpair failed and we were unable to recover it. 00:27:02.223 [2024-07-10 15:50:41.531573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.531716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.531742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.223 qpair failed and we were unable to recover it. 00:27:02.223 [2024-07-10 15:50:41.531906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.532091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.532117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.223 qpair failed and we were unable to recover it. 00:27:02.223 [2024-07-10 15:50:41.532275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.532547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.223 [2024-07-10 15:50:41.532573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.223 qpair failed and we were unable to recover it. 00:27:02.223 [2024-07-10 15:50:41.532779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.532919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.532947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 
00:27:02.224 [2024-07-10 15:50:41.533154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.533317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.533362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.533570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.533755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.533784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.534128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.534355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.534381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.534580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.534773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.534799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.534964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.535096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.535125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.535318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.535531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.535559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.535748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.535911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.535937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 
00:27:02.224 [2024-07-10 15:50:41.536081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.536310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.536336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.536514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.536659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.536697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.224 qpair failed and we were unable to recover it. 00:27:02.224 [2024-07-10 15:50:41.536911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.224 [2024-07-10 15:50:41.537072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.537113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.225 qpair failed and we were unable to recover it. 00:27:02.225 [2024-07-10 15:50:41.537316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.537532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.537559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.225 qpair failed and we were unable to recover it. 00:27:02.225 [2024-07-10 15:50:41.537719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.537879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.537904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.225 qpair failed and we were unable to recover it. 00:27:02.225 [2024-07-10 15:50:41.538085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.538248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.538274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.225 qpair failed and we were unable to recover it. 00:27:02.225 [2024-07-10 15:50:41.538437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.538628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.538656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.225 qpair failed and we were unable to recover it. 
00:27:02.225 [2024-07-10 15:50:41.538940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.539096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.539126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.225 qpair failed and we were unable to recover it. 00:27:02.225 [2024-07-10 15:50:41.539314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.539512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.225 [2024-07-10 15:50:41.539538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.225 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.539675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.539904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.539930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.540115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.540327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.540356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.540533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.540698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.540731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.540896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.541078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.541107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.541285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.541521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.541570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 
00:27:02.226 [2024-07-10 15:50:41.541752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.542031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.542087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.542292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.542463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.542497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.542674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.542853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.542883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.226 qpair failed and we were unable to recover it. 00:27:02.226 [2024-07-10 15:50:41.543072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.543259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.226 [2024-07-10 15:50:41.543301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 00:27:02.227 [2024-07-10 15:50:41.543490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.543618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.543659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 00:27:02.227 [2024-07-10 15:50:41.543837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.544044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.544069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 00:27:02.227 [2024-07-10 15:50:41.544231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.544417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.544454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 
00:27:02.227 [2024-07-10 15:50:41.544625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.544770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.544798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 00:27:02.227 [2024-07-10 15:50:41.544981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.545175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.545214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 00:27:02.227 [2024-07-10 15:50:41.545374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.545546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.545576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 00:27:02.227 [2024-07-10 15:50:41.545712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.545879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.545908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.227 qpair failed and we were unable to recover it. 00:27:02.227 [2024-07-10 15:50:41.546082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.227 [2024-07-10 15:50:41.546270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.546300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.228 qpair failed and we were unable to recover it. 00:27:02.228 [2024-07-10 15:50:41.546491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.546681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.546712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.228 qpair failed and we were unable to recover it. 00:27:02.228 [2024-07-10 15:50:41.546890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.547062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.547092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.228 qpair failed and we were unable to recover it. 
00:27:02.228 [2024-07-10 15:50:41.547279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.547494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.547543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.228 qpair failed and we were unable to recover it. 00:27:02.228 [2024-07-10 15:50:41.547690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.547827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.547855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.228 qpair failed and we were unable to recover it. 00:27:02.228 [2024-07-10 15:50:41.548012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.548222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.228 [2024-07-10 15:50:41.548251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.229 qpair failed and we were unable to recover it. 00:27:02.229 [2024-07-10 15:50:41.548458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.229 [2024-07-10 15:50:41.548718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.229 [2024-07-10 15:50:41.548747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.229 qpair failed and we were unable to recover it. 00:27:02.229 [2024-07-10 15:50:41.548925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.229 [2024-07-10 15:50:41.549097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.229 [2024-07-10 15:50:41.549132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.229 qpair failed and we were unable to recover it. 00:27:02.229 [2024-07-10 15:50:41.549306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.229 [2024-07-10 15:50:41.549473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.549516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.508 qpair failed and we were unable to recover it. 00:27:02.508 [2024-07-10 15:50:41.549677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.549853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.549879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.508 qpair failed and we were unable to recover it. 
00:27:02.508 [2024-07-10 15:50:41.550097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.550246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.550275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.508 qpair failed and we were unable to recover it. 00:27:02.508 [2024-07-10 15:50:41.550497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.550666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.550696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.508 qpair failed and we were unable to recover it. 00:27:02.508 [2024-07-10 15:50:41.550880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.551057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.551087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.508 qpair failed and we were unable to recover it. 00:27:02.508 [2024-07-10 15:50:41.551246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.551389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.551416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.508 qpair failed and we were unable to recover it. 00:27:02.508 [2024-07-10 15:50:41.551571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.551737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.508 [2024-07-10 15:50:41.551782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.551972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.552172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.552205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.552443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.552700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.552751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 
00:27:02.509 [2024-07-10 15:50:41.552959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.553162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.553213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.553399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.553592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.553621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.553806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.553986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.554013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.554140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.554322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.554348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.554478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.554666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.554713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.554920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.555082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.555109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.555241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.555441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.555481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 
00:27:02.509 [2024-07-10 15:50:41.555658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.555881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.555908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.556076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.556290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.556319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.556487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.556648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.556674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.556869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.557067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.557114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.557306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.557469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.557495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.557670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.557837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.557862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.558009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.558159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.558187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 
00:27:02.509 [2024-07-10 15:50:41.558340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.558512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.558542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.558691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.558877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.558903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.559041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.559202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.559249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.559457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.559697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.559726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.559931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.560114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.560143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.560321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.560523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.560554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.560738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.560912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.560941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 
00:27:02.509 [2024-07-10 15:50:41.561175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.561366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.561395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.561582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.561766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.561791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.561954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.562143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.562170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.562365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.562506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.562533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.562766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.563026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.563071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.563276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.563481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.563508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 00:27:02.509 [2024-07-10 15:50:41.563720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.563912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.563938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.509 qpair failed and we were unable to recover it. 
00:27:02.509 [2024-07-10 15:50:41.564127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.509 [2024-07-10 15:50:41.564300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.564329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.564504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.564681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.564710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.564920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.565130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.565177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.565383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.565601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.565628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.565796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.565981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.566023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.566199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.566374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.566402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.566594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.566793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.566826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 
00:27:02.510 [2024-07-10 15:50:41.567020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.567211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.567237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.567374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.567566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.567595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.567766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.567910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.567938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.568066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.568254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.568280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.568439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.568624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.568653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.568831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.568999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.569025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 00:27:02.510 [2024-07-10 15:50:41.569329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.569558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.510 [2024-07-10 15:50:41.569588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:02.510 qpair failed and we were unable to recover it. 
00:27:02.510 [2024-07-10 15:50:41.569748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.510 [2024-07-10 15:50:41.569956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.510 [2024-07-10 15:50:41.569986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420
00:27:02.510 qpair failed and we were unable to recover it.
00:27:02.510 [... the same sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously through 15:50:41.608924 ...]
00:27:02.513 [... the same pattern continues for tqpair=0x7f69f8000b90 from 15:50:41.609108 through 15:50:41.610404, after which the reported qpair handle changes ...]
00:27:02.513 [2024-07-10 15:50:41.610597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.514 [2024-07-10 15:50:41.610782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.514 [2024-07-10 15:50:41.610812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420
00:27:02.514 qpair failed and we were unable to recover it.
00:27:02.515 [... the same sequence then repeats for tqpair=0x7f6a00000b90 (addr=10.0.0.2, port=4420) through 15:50:41.632942, each attempt ending with "qpair failed and we were unable to recover it." ...]
00:27:02.515 [2024-07-10 15:50:41.633188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.633358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.633399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.633572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.633750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.633792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.634070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.634241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.634267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.634390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.634581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.634626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.634792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.634987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.635031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.635214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.635391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.635437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.635611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.635812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.635855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 
00:27:02.515 [2024-07-10 15:50:41.636111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.636298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.636325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.636530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.636686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.636715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.515 qpair failed and we were unable to recover it. 00:27:02.515 [2024-07-10 15:50:41.636984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.637246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.515 [2024-07-10 15:50:41.637288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.637464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.637656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.637699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.637896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.638097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.638141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.638305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.638518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.638562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.638715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.638943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.638987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 
00:27:02.516 [2024-07-10 15:50:41.639153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.639338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.639364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.639549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.639758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.639802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.639952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.640153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.640197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.640350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.640532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.640577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.640787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.641139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.641196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.641366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.641544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.641587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.641750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.641930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.641976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 
00:27:02.516 [2024-07-10 15:50:41.642132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.642312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.642339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.642533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.642751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.642779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.642962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.643173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.643200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.643354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.643550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.643595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.643808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.643951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.643979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.644176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.644338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.644366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.644571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.644803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.644847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 
00:27:02.516 [2024-07-10 15:50:41.645006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.645154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.645182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.645353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.645538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.645568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.645769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.645975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.646018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.646204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.646369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.646397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.646623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.646790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.646834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.647050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.647204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.647230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.647367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.647581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.647626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 
00:27:02.516 [2024-07-10 15:50:41.647843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.648007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.648051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.648210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.648397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.648431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.648600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.648792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.648836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.648999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.649231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.649275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.649501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.649689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.516 [2024-07-10 15:50:41.649733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.516 qpair failed and we were unable to recover it. 00:27:02.516 [2024-07-10 15:50:41.649945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.650241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.650296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.650503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.650702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.650745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 
00:27:02.517 [2024-07-10 15:50:41.650954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.651343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.651397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.651588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.651787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.651831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.652021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.652221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.652247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.652409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.652580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.652627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.652812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.652983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.653030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.653244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.653468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.653498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.653729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.654028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.654081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 
00:27:02.517 [2024-07-10 15:50:41.654272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.654438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.654466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.654630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.654838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.654882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.655069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.655247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.655274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.655399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.655599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.655626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.655814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.655978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.656020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.656216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.656403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.656435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.656601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.656798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.656840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 
00:27:02.517 [2024-07-10 15:50:41.657059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.657213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.657240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.657401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.657606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.657649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.657860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.658037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.658080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.658243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.658405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.658439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.658620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.658834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.658861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.659160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.659362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.659393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.659886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.660081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.660131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 
00:27:02.517 [2024-07-10 15:50:41.660279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.660512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.660562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.660757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.660975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.661028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.517 [2024-07-10 15:50:41.661210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.661347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.517 [2024-07-10 15:50:41.661377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.517 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.661568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.661776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.661823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.662057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.662232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.662259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.662400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.662571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.662615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.662913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.663253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.663304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 
00:27:02.518 [2024-07-10 15:50:41.663495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.663703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.663747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.663971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.664129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.664157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.664338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.664551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.664600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.664802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.665131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.665191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.665363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.665562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.665608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.665804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.666014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.666059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.666207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.666408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.666446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 
00:27:02.518 [2024-07-10 15:50:41.666622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.666873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.666922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.667117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.667315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.667344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.667532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.667758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.667820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.668021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.668209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.668243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.668389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.668602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.668652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.668817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.669028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.669078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.669228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.669421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.669458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 
00:27:02.518 [2024-07-10 15:50:41.669636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.669814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.669862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.670069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.670267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.670295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.670489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.670700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.670745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.670927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.671135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.671181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.671354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.671546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.671592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.671813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.672032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.672091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.672270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.672454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.672482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 
00:27:02.518 [2024-07-10 15:50:41.672699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.672897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.672945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.673129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.673318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.673355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.673541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.673758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.673806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.674009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.674345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.674396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.674606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.674820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.674864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.518 [2024-07-10 15:50:41.675086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.675254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.518 [2024-07-10 15:50:41.675286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.518 qpair failed and we were unable to recover it. 00:27:02.519 [2024-07-10 15:50:41.675460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.519 [2024-07-10 15:50:41.675656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.519 [2024-07-10 15:50:41.675703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.519 qpair failed and we were unable to recover it. 
00:27:02.519 [2024-07-10 15:50:41.675896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.519 [2024-07-10 15:50:41.676110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.519 [2024-07-10 15:50:41.676155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420
00:27:02.519 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix.c:1032:posix_sock_create connect() failures with errno = 111, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock error for tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats continuously for every retry between 15:50:41.676 and 15:50:41.740 ...]
00:27:02.524 [2024-07-10 15:50:41.739819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-10 15:50:41.740026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.524 [2024-07-10 15:50:41.740072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420
00:27:02.524 qpair failed and we were unable to recover it.
00:27:02.524 [2024-07-10 15:50:41.740243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.740406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.740444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.740623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.740799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.740846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.741031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.741221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.741247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.741414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.741623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.741649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.741839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.742066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.742108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.742246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.742440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.742466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.742652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.742876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.742918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 
00:27:02.524 [2024-07-10 15:50:41.743072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.743248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.743272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.743437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.743577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.743601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.743783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.743986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.744029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.744215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.744386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.744410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.744551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.744733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.744780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.744945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.745188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.745231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.745394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.745563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.745589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 
00:27:02.524 [2024-07-10 15:50:41.745752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.745944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.745986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.746166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.746338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.746362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.746558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.746735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.746776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.747005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.747221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.747260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.747420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.747608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.747651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.747832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.748036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.748083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.748249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.748407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.748439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 
00:27:02.524 [2024-07-10 15:50:41.748603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.748770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.748796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.748975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.749216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.749243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.749433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.749574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.524 [2024-07-10 15:50:41.749599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.524 qpair failed and we were unable to recover it. 00:27:02.524 [2024-07-10 15:50:41.749802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.750014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.750055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.750257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.750468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.750495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.750667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.750858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.750900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.751085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.751270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.751295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 
00:27:02.525 [2024-07-10 15:50:41.751475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.751635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.751663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.751842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.752070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.752113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.752274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.752409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.752444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.752664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.752862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.752886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.753071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.753248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.753272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.753409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.753610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.753653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.753839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.753988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.754014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 
00:27:02.525 [2024-07-10 15:50:41.754203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.754387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.754411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.754637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.754835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.754879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.755059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.755206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.755230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.755387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.755560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.755586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.755782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.755987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.756030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.756215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.756373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.756398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.756562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d674b0 is same with the state(5) to be set 00:27:02.525 [2024-07-10 15:50:41.756783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.756963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.757020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 
00:27:02.525 [2024-07-10 15:50:41.757231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.757406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.757492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.757659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.757822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.757850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.758053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.758287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.758330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.758528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.758689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.758714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.758935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.759081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.759108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.759284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.759518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.759544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.759675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.759871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.759898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 
00:27:02.525 [2024-07-10 15:50:41.760050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.760212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.760239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.760406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.760603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.760628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.760816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.760951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.760984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.525 qpair failed and we were unable to recover it. 00:27:02.525 [2024-07-10 15:50:41.761201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.761374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.525 [2024-07-10 15:50:41.761401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.761578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.761740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.761782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.761925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.762144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.762177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.762359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.762556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.762582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 
00:27:02.526 [2024-07-10 15:50:41.762708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.762881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.762928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.763098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.763266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.763293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.763484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.763625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.763649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.763817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.763990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.764017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.764199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.764385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.764412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.764579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.764726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.764756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.764992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.765149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.765202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 
00:27:02.526 [2024-07-10 15:50:41.765383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.765557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.765582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.765745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.765931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.765955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.766176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.766358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.766384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.766577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.766738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.766765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.766948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.767125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.767152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.767331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.767546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.767572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.767721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.767915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.767959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 
00:27:02.526 [2024-07-10 15:50:41.768195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.768347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.768373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.768546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.768765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.768793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.769011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.769198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.769223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.769439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.769576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.769603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.769792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.769957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.769980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.770116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.770277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.770303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.770543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.770685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.770712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 
00:27:02.526 [2024-07-10 15:50:41.770918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.771055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.771106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.771300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.771464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.771489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.771644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.771801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.771827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.526 qpair failed and we were unable to recover it. 00:27:02.526 [2024-07-10 15:50:41.772053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.772217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.526 [2024-07-10 15:50:41.772241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.772397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.772663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.772689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.772903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.773043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.773069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.773206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.773367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.773391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 
00:27:02.527 [2024-07-10 15:50:41.773552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.773688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.773714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.773916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.774135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.774179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.774331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.774511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.774538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.774677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.774883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.774908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.775063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.775197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.775222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.775400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.775544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.775568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.776576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.776798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.776825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 
00:27:02.527 [2024-07-10 15:50:41.777026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.777215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.777240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.777402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.777631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.777658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.777838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.778010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.778038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.778227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.778409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.778445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.778606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.778767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.778809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.778985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.779199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.779234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.779375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.779580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.779609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 
00:27:02.527 [2024-07-10 15:50:41.779776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.779933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.779976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.780143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.780329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.780357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.780540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.780746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.780774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.780989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.781200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.781228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.781414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.781616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.781645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.781826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.782024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.782074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.782267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.782444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.782472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 
00:27:02.527 [2024-07-10 15:50:41.782620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.782801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.782828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.783006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.783173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.783199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.783396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.783558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.783587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.783761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.783933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.783960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.784169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.784332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.784359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.784570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.784753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.784802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.527 qpair failed and we were unable to recover it. 00:27:02.527 [2024-07-10 15:50:41.785020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.785216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.527 [2024-07-10 15:50:41.785264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 
00:27:02.528 [2024-07-10 15:50:41.785441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.785657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.785682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.785882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.786072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.786101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.786286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.786468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.786493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.786651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.786810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.786836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.786996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.787151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.787176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.787398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.787582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.787608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.787755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.787940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.787967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 
00:27:02.528 [2024-07-10 15:50:41.788143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.788293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.788335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.788539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.788703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.788743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.788891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.789044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.789073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.789278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.789436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.789464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.789666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.789868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.789895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.790074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.790233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.790257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.790418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.790570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.790594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 
00:27:02.528 [2024-07-10 15:50:41.790792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.790927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.790962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.791140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.791290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.791316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.791476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.791632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.791674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.791854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.791997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.792026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.792212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.792371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.792395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.792602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.792731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.792756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.792919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.793072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.793112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 
00:27:02.528 [2024-07-10 15:50:41.793320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.793482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.793526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.793709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.793877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.793901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.794096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.794302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.794330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.794547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.794736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.794778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.794931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.795071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.795094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.795274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.795412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.795452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.795621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.795790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.795819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 
00:27:02.528 [2024-07-10 15:50:41.795980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.796141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.796167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.796330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.796465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.796505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.528 [2024-07-10 15:50:41.796692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.796872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.528 [2024-07-10 15:50:41.796912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.528 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.797095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.797272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.797299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.797493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.797637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.797662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.797803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.797965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.797990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.798125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.798311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.798353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 
00:27:02.529 [2024-07-10 15:50:41.798555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.798693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.798718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.798884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.799025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.799052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.799270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.799483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.799511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.799694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.799861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.799886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.800090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.800251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.800277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.800454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.800615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.800640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.800859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.801010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.801042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 
00:27:02.529 [2024-07-10 15:50:41.801193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.801342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.801369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.801571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.801779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.801806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.801946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.802117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.802144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.802321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.802495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.802523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.802729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.802867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.802892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.803083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.803283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.803310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.803485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.803631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.803659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 
00:27:02.529 [2024-07-10 15:50:41.803841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.804019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.804046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.804200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.804380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.804404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.804597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.804764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.804823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.805010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.805145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.805169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.805302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.805484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.805512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.805653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.805804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.805828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.805963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.806123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.806147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 
00:27:02.529 [2024-07-10 15:50:41.806309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.806507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.806532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.806689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.806884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.806911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.807095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.807263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.807290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.807469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.807679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.807706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.807875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.808027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.808054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.808233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.808394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.529 [2024-07-10 15:50:41.808442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.529 qpair failed and we were unable to recover it. 00:27:02.529 [2024-07-10 15:50:41.808630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.808826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.808875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 
00:27:02.530 [2024-07-10 15:50:41.809034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.809170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.809197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.809375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.809573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.809600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.809785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.809963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.809988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.810172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.810347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.810373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.810542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.810705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.810748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.810914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.811120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.811175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.811360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.811520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.811546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 
00:27:02.530 [2024-07-10 15:50:41.811706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.811885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.811935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.812123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.812330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.812358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.812574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.812723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.812749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.812935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.813139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.813165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.813308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.813453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.813482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.813669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.813939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.813987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.814168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.814371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.814398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 
00:27:02.530 [2024-07-10 15:50:41.814576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.814736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.814760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.814994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.815156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.815199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.815402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.815585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.815609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.815767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.815898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.815922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.816093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.816288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.816312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.816472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.816659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.816683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.816844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.816970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.816995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 
00:27:02.530 [2024-07-10 15:50:41.817183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.817355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.817379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.817530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.817716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.817756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.817969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.818132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.818156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.818334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.818516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.530 [2024-07-10 15:50:41.818543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.530 qpair failed and we were unable to recover it. 00:27:02.530 [2024-07-10 15:50:41.818703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.818865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.818889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.819043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.819228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.819254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.819396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.819554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.819580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 
00:27:02.531 [2024-07-10 15:50:41.819780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.819943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.819966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.820127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.820260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.820287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.820483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.820665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.820690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.820853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.821060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.821121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.821298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.821483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.821509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.821666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.821829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.821855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.822014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.822218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.822245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 
00:27:02.531 [2024-07-10 15:50:41.822446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.822621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.822647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.822825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.823023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.823050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.823258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.823446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.823472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.823651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.823827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.823854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.824026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.824177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.824207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.824388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.824577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.824605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.824784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.824945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.824970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 
00:27:02.531 [2024-07-10 15:50:41.825169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.825364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.825392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.825608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.825790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.825817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.825995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.826128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.826154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.826326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.826468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.826495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.826661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.826847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.826874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.827054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.827224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.827247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 00:27:02.531 [2024-07-10 15:50:41.827448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.827578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.531 [2024-07-10 15:50:41.827602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.531 qpair failed and we were unable to recover it. 
00:27:02.531 [2024-07-10 15:50:41.827755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.531 [2024-07-10 15:50:41.827911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.531 [2024-07-10 15:50:41.827935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:02.531 qpair failed and we were unable to recover it.
00:27:02.531 - 00:27:02.807 [the same three-message pattern repeats continuously from 2024-07-10 15:50:41.827 through 15:50:41.887: two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420", followed by "qpair failed and we were unable to recover it."]
00:27:02.807 [2024-07-10 15:50:41.887539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.887718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.887745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.807 qpair failed and we were unable to recover it. 00:27:02.807 [2024-07-10 15:50:41.887892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.888026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.888053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.807 qpair failed and we were unable to recover it. 00:27:02.807 [2024-07-10 15:50:41.888207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.888370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.888394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.807 qpair failed and we were unable to recover it. 00:27:02.807 [2024-07-10 15:50:41.888565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.888852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.888907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.807 qpair failed and we were unable to recover it. 00:27:02.807 [2024-07-10 15:50:41.889082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.889260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.807 [2024-07-10 15:50:41.889286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.807 qpair failed and we were unable to recover it. 00:27:02.807 [2024-07-10 15:50:41.889471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.889600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.889640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.889812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.889960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.889989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.890160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.890329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.890356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.890604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.890784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.890811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.891010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.891260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.891310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.891520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.891671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.891698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.891907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.892081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.892108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.892263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.892530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.892559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.892735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.892883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.892910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.893064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.893266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.893293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.893439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.893582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.893609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.893782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.894067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.894129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.894308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.894559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.894587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.894788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.894943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.894969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.895134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.895323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.895350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.895562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.895751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.895778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.895951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.896103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.896130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.896307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.896458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.896487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.896694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.896855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.896879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.897053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.897226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.897253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.897438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.897616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.897643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.897825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.897983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.898024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.898184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.898347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.898371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.898569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.898722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.898748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.898899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.899081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.899106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.899256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.899420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.899452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.899651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.899812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.899837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.899970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.900103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.900127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.900317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.900459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.900501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.900639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.900768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.900792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.900981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.901124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.901151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.901325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.901544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.901611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.901826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.902002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.902029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.902236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.902415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.902448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.902608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.902776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.902801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.902959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.903159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.903184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.903347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.903517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.903558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.903734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.903998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.904053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.904232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.904371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.904398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.904570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.904750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.904791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.904993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.905143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.905171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.905309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.905486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.905515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.905722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.905909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.905936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.906109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.906295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.906323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.906508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.906637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.906662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.906802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.906961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.906989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.907168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.907346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.907373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.907563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.907721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.907762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.907968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.908098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.908124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.908340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.908510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.908539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.908689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.908835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.908865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.909040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.909242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.909269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.909409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.909593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.909621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.909799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.909969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.909995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.910157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.910345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.910371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.910510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.910701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.910726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.910889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.911053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.911078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.808 [2024-07-10 15:50:41.911266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.911455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.911481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 
00:27:02.808 [2024-07-10 15:50:41.911626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.911790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.808 [2024-07-10 15:50:41.911816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.808 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.911978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.912117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.912143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.912331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.912500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.912527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.912712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.912889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.912914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.913082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.913221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.913256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.913405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.913581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.913607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.913783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.913921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.913954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 
00:27:02.809 [2024-07-10 15:50:41.914134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.914323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.914348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.914494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.914656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.914681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.914843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.915011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.915036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.915205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.915336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.915361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.915517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.915678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.915703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.915840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.916001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.916026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.916163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.916293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.916318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 
00:27:02.809 [2024-07-10 15:50:41.916504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.916664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.916689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.916849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.917170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.917485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.917798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.917988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.918151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.918316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.918341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.918502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.918662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.918688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 
00:27:02.809 [2024-07-10 15:50:41.918846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.918980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.919005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.919204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.919388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.919413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.919611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.919742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.919767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.919929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.920088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.920113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.920271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.920449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.920476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.920638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.920827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.920852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.921094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.921256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.921282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 
00:27:02.809 [2024-07-10 15:50:41.921449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.921631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.921657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.921795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.921952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.921978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.922158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.922342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.922367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.922534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.922701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.922727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.922895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.923050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.923075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.923235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.923394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.923419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.923596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.923791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.923817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 
00:27:02.809 [2024-07-10 15:50:41.924000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.924165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.924192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.924401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.924565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.924591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.924752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.924882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.924907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.925147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.925314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.925343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.925517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.925685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.925711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.925874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.926061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.926086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.926255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.926418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.926451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 
00:27:02.809 [2024-07-10 15:50:41.926575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.926737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.926762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.926927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.927117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.927142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.927300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.927462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.927488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.927626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.927783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.927808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.927943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.928105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.928131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.928290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.928422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.928454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.809 qpair failed and we were unable to recover it. 00:27:02.809 [2024-07-10 15:50:41.928612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.809 [2024-07-10 15:50:41.928777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.928803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.928993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.929155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.929180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.929314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.929478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.929504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.929663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.929824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.929850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.930091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.930225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.930250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.930436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.930617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.930643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.930807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.930956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.930981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.931137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.931301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.931326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.931476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.931648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.931675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.931836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.931997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.932023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.932184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.932347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.932373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.932556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.932687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.932713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.932871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.933058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.933083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.933252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.933386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.933411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.933579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.933741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.933768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.933919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.934055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.934080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.934242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.934368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.934394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.934576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.934816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.934841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.934974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.935139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.935164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.935295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.935537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.935564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.935730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.935865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.935891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.936034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.936218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.936244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.936421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.936611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.936637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.936773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.936933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.936959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.937100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.937258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.937283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.937445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.937600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.937625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.937787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.937954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.937979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.938165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.938297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.938322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.938513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.938673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.938700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.938833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.938997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.939023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.939160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.939326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.939352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.939518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.939658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.939684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.939823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.939984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.940010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.940170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.940302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.940327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.940506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.940668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.940693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.940870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.941047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.941073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.941210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.941347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.941374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.941521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.941678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.941704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.941848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.942043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.942068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.942230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.942408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.942439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.942618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.942775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.942800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.942937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.943090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.943119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.943281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.943409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.943452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.943613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.943771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.943797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.943958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.944117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.944143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.944340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.944475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.944501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.944639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.944772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.944797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.944957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.945084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.945109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.945266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.945419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.945451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.945639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.945802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.945829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.945993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.946180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.946206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.946376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.946525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.946553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.946729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.946886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.946912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.947078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.947235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.947261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.947419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.947616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.947642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.947803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.947972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.947997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.810 [2024-07-10 15:50:41.948159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.948317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.948342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 
00:27:02.810 [2024-07-10 15:50:41.948477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.948612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.810 [2024-07-10 15:50:41.948637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.810 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.948773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.948957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.948982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.949137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.949317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.949344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.949554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.949713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.949738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.949870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.950030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.950055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.950220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.950383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.950409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.950581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.950717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.950742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 
00:27:02.811 [2024-07-10 15:50:41.950984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.951167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.951193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.951355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.951488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.951514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.951673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.951837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.951862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.951995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.952150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.952175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.952346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.952506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.952532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.952720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.952904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.952928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.953068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.953232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.953258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 
00:27:02.811 [2024-07-10 15:50:41.953418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.953566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.953591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.953756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.953911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.953936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.954126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.954286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.954311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.954474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.954606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.954632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.954818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.954952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.954977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.955111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.955272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.955297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.955461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.955628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.955654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 
00:27:02.811 [2024-07-10 15:50:41.955849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.955978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.956003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.956174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.956344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.956369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.956522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.956657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.956681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.956822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.956976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.957001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.957144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.957312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.957337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.957503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.957638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.957664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.957790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.957974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.957999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 
00:27:02.811 [2024-07-10 15:50:41.958166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.958303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.958329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.958482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.958660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.958686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.958846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.958976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.959001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.959185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.959335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.959360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.959521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.959658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.959684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.959824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.959955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.959981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.960138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.960299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.960324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 
00:27:02.811 [2024-07-10 15:50:41.960469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.960628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.960658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.960821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.961008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.961033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.961189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.961360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.961385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.961554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.961695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.961721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.961861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.962022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.962048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.962207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.962343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.962368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.962560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.962721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.962746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 
00:27:02.811 [2024-07-10 15:50:41.962920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.963086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.963111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.963270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.963417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.963449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.963607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.963794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.963820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.963957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.964114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.964144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.964313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.964466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.964492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.964679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.964816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.964842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.964977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.965122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.965147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 
00:27:02.811 [2024-07-10 15:50:41.965285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.965492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.965519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.965678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.965809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.965834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.965976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.966110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.966135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.966300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.966488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.966514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.811 qpair failed and we were unable to recover it. 00:27:02.811 [2024-07-10 15:50:41.966647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.811 [2024-07-10 15:50:41.966800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.966825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.966987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.967115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.967140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.967277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.967451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.967477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.967613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.967784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.967809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.967972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.968128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.968153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.968316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.968504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.968531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.968664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.968824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.968849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.968984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.969141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.969166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.969321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.969445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.969472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.969628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.969812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.969838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.970001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.970154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.970179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.970364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.970527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.970553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.970712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.970896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.970922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.971056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.971211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.971237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.971402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.971574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.971600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.971787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.971948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.971973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.972177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.972312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.972340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.972532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.972691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.972716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.972878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.973063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.973088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.973244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.973410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.973441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.973577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.973757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.973785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.973964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.974170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.974198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.974378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.974556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.974585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.974758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.974970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.975019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.975230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.975408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.975457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.975626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.975897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.975948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.976138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.976339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.976367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.976558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.976725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.976750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.976937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.977134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.977184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.977385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.977546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.977576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.977733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.977869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.977909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.978078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.978247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.978274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.978443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.978617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.978645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.978819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.979003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.979066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.979274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.979460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.979486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.979651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.979828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.979858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.980037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.980246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.980274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.980421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.980615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.980644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.980832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.980991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.981033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.981193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.981354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.981380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.981607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.981768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.981810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.982027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.982198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.982226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.982398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.982590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.982616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.982779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.982943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.982988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.983191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.983388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.983416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.983580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.983736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.983761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.983922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.984148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.984210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.984348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.984524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.984554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.984734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.984909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.984937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.985109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.985309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.985337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.985556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.985739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.985768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.985954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.986176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.986228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.986453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.986603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.986628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.986820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.986982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.987007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.987170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.987347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.987375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.987556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.987751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.987801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.987988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.988120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.988145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.988281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.988444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.988471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.988629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.988794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.988820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 
00:27:02.812 [2024-07-10 15:50:41.989003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.989181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.812 [2024-07-10 15:50:41.989211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.812 qpair failed and we were unable to recover it. 00:27:02.812 [2024-07-10 15:50:41.989434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.989616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.989642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.989833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.989985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.990014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.990189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.990362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.990390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.990580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.990755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.990782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.990960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.991137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.991162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.991345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.991512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.991538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:41.991708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.991888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.991916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.992092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.992310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.992336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.992499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.992669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.992694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.992898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.993072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.993099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.993311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.993476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.993502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.993645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.993824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.993852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.994034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.994206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.994234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:41.994409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.994590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.994619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.994793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.994995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.995058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.995211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.995413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.995455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.995635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.995824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.995853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.996003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.996178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.996207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.996394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.996534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.996561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.996762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.996908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.996936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:41.997098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.997280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.997321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.997506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.997710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.997738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.997940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.998186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.998240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.998413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.998634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.998659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.998820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.998985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.999011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.999195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.999408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.999450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:41.999619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.999786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:41.999829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:42.000020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.000175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.000200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.000410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.000601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.000630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.000816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.001008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.001034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.001218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.001370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.001396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.001557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.001694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.001719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.001946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.002253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.002312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.002496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.002704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.002732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:42.002941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.003102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.003150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.003329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.003535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.003564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.003747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.003913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.003938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.004145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.004350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.004378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.004556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.004787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.004813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.004975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.005182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.005210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.005390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.005577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.005605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:42.005792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.005944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.005969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.006158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.006343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.006370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.006557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.006714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.006756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.006965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.007181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.007233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.007445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.007599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.007624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.007786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.008045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.008095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.008279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.008504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.008531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:42.008665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.008825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.008868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.009074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.009251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.009279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.009493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.009620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.009645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.009780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.009975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.010001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.010167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.010373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.010401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.010563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.010714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.010742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.010938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.011141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.011169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 
00:27:02.813 [2024-07-10 15:50:42.011341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.011487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.011518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.011691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.011864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.011893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.813 qpair failed and we were unable to recover it. 00:27:02.813 [2024-07-10 15:50:42.012077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.012254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.813 [2024-07-10 15:50:42.012282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.012438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.012591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.012619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.012815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.012943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.012969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.013176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.013356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.013384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.013602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.013815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.013866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 
00:27:02.814 [2024-07-10 15:50:42.014025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.014173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.014201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.014408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.014586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.014614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.014795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.014937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.014963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.015147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.015350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.015379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.015550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.015735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.015764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.015956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.016115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.016140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 00:27:02.814 [2024-07-10 15:50:42.016355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.016570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.814 [2024-07-10 15:50:42.016596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.814 qpair failed and we were unable to recover it. 
00:27:02.814 [2024-07-10 15:50:42.016758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.814 [2024-07-10 15:50:42.016949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.814 [2024-07-10 15:50:42.016974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:02.814 qpair failed and we were unable to recover it.
[The same failure sequence (two posix_sock_create connect() errors with errno = 111, one nvme_tcp_qpair_connect_sock error for tqpair=0x1d599f0 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps advancing, through 15:50:42.075758:]
00:27:02.816 [2024-07-10 15:50:42.075577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.816 [2024-07-10 15:50:42.075733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:02.816 [2024-07-10 15:50:42.075758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:02.816 qpair failed and we were unable to recover it.
00:27:02.816 [2024-07-10 15:50:42.075975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.816 [2024-07-10 15:50:42.076183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.816 [2024-07-10 15:50:42.076211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.076422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.076591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.076617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.076748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.076887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.076928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.077135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.077307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.077335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.077505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.077676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.077705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.077857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.077985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.078010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.078227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.078400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.078433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.078617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.078819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.078845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.079049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.079233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.079257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.079402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.079568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.079595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.079759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.079946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.079971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.080101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.080233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.080258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.080430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.080588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.080614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.080798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.080957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.080984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.081146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.081276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.081301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.081469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.081631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.081657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.081822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.081979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.082004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.082160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.082317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.082342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.082527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.082709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.082734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.082871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.083031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.083056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.083248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.083384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.083410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.083581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.083719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.083744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.083916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.084082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.084107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.084241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.084431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.084457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.084581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.084768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.084793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.084950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.085104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.085134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.085299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.085458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.085484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.085630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.085794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.085819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.085973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.086133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.086160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.086295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.086447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.086473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.086635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.086787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.086812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.086979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.087133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.087158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.087321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.087451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.087478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.087643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.087772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.087798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.087961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.088086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.088111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.088273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.088430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.088455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.088619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.088782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.088809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.088942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.089102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.089127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.089254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.089381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.089406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.089581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.089732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.089757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.089919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.090073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.090098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.090261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.090402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.090435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.090623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.090779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.090804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.090972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.091102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.091127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.091315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.091453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.091479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.091640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.091800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.091825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.091997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.092152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.092178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.092353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.092533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.092560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.092689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.092826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.092852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.093037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.093162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.093187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.093353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.093510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.093537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.093724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.093852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.093877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.094002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.094167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.094192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.094388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.094560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.094586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.094772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.094956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.094981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.095117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.095252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.095277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 
00:27:02.817 [2024-07-10 15:50:42.095436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.095617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.095651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.095842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.096080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.096129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.096297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.096477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.096509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.096671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.096875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.817 [2024-07-10 15:50:42.096924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.817 qpair failed and we were unable to recover it. 00:27:02.817 [2024-07-10 15:50:42.097112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.097303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.097334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.097509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.097679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.097708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.097930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.098134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.098178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.098352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.098514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.098560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.098725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.098908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.098956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.099121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.099314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.099343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.099536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.099738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.099770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.099962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.100121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.100149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.100336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.100479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.100505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.100717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.101004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.101033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.101234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.101440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.101481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.101648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.101827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.101856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.102010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.102160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.102188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.102373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.102564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.102590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.102779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.102944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.102970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.103290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.103505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.103533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.103699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.103885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.103915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.104184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.104356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.104384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.104549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.104706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.104735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.104901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.105059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.105089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.105243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.105417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.105471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.105636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.105800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.105826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.105966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.106129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.106154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.106340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.106501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.106528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.106690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.106839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.106865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.107051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.107213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.107238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.107406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.107555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.107581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.107747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.107911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.107953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.108129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.108307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.108335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.108524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.108684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.108712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.108875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.109038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.109063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.109223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.109384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.109411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.109583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.109747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.109774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.109948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.110170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.110222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.110377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.110542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.110567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.110729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.110865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.110890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.111131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.111286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.111315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.111510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.111663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.111688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.111826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.112011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.112040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.112214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.112415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.112450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.112635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.112778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.112806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.112970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.113139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.113164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.113321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.113535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.113562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.113725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.113926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.113999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.114142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.114339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.114366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.114551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.114738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.114763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.114963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.115135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.115161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.115298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.115435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.115463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.115600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.115781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.115807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.115947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.116125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.116151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.116298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.116473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.116499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 00:27:02.818 [2024-07-10 15:50:42.116658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.116848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.818 [2024-07-10 15:50:42.116873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.818 qpair failed and we were unable to recover it. 
00:27:02.818 [2024-07-10 15:50:42.117046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.117213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.117238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.117373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.117536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.117563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.117691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.117852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.117877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.118017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.118179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.118205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.118338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.118485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.118517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.118691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.118879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.118907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.119071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.119203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.119230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.119400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.119543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.119569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.119730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.119914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.119939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.120099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.120226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.120253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.120420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.120580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.120606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.120792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.120930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.120957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.121088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.121261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.121287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.121422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.121635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.121660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.121816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.121969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.121999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.122162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.122356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.122381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.122561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.122721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.122745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.122909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.123061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.123087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.123247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.123408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.123449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.123639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.123783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.123808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.123939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.124130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.124155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.124317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.124476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.124502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.124661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.124829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.124856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.125012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.125141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.125165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.125355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.125513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.125543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.125706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.125896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.125927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.126087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.126241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.126266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.126445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.126650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.126679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.126890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.127100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.127125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.127310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.127459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.127488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.127657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.127848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.127876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.128051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.128226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.128253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.128448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.128614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.128640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.128831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.129007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.129036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.129222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.129355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.129385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.129587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.129744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.129772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.129974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.130185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.130211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.130371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.130571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.130597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.130794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.130979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.131007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.131217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.131400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.131431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.131594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.131798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.131826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.131974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.132179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.132208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.132411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.132599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.132625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.132799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.132985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.133010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.133168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.133344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.133372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.133525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.133671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.133699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.133892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.134107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.134136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.134303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.134441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.134486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.134644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.134823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.134852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.135111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.135322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.135383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.135573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.135745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.135773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.135996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.136136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.136181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.136369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.136524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.136550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.136681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.136864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.136904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.137116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.137277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.137303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.137530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.137694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.137748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 
00:27:02.819 [2024-07-10 15:50:42.137953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.138252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.138309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.138480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.138614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.138640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.138835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.139014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.819 [2024-07-10 15:50:42.139042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.819 qpair failed and we were unable to recover it. 00:27:02.819 [2024-07-10 15:50:42.139199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.139385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.139410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.139647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.139842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.139867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.140078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.140263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.140292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.140499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.140674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.140720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.140931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.141107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.141135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.141278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.141492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.141519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.141684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.141859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.141887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.142062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.142237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.142266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.142485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.142646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.142672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.142832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.143050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.143112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.143291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.143520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.143547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.143733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.143885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.143913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.144100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.144286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.144311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.144535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.144687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.144724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.144940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.145116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.145144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.145353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.145497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.145527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.145693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.145958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.146013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.146288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.146505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.146531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.146694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.146942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.147000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.147193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.147346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.147371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.147558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.147765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.147793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.148011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.148155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.148185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.148376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.148539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.148565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.148773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.148939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.148968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.149127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.149315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.149340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.149531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.149671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.149698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.149908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.150171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.150228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.150415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.150579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.150605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.150756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.150936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.150964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.151152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.151310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.151338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.151523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.151662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.151687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.151914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.152091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.152119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.152320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.152556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.152585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.152741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.152914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.152939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.153140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.153293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.153318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.153532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.153722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.153750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.153921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.154077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.154122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.154306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.154511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.154540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.154724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.154932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.154958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.155147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.155282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.155307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.155494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.155711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.155736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.155922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.156172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.156234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.156393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.156563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.156589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.156748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.156935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.156961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.157120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.157279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.157305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.157506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.157687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.157729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.157938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.158118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.158143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.158284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.158469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.158496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.158654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.158812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.158853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.159056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.159259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.159287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.159496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.159675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.159703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.159911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.160034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.160059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 00:27:02.820 [2024-07-10 15:50:42.160241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.160383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.160408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.820 qpair failed and we were unable to recover it. 
00:27:02.820 [2024-07-10 15:50:42.160573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.820 [2024-07-10 15:50:42.160699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.160726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.160908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.161033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.161058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.161216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.161401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.161441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.161580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.161713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.161739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.161900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.162053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.162078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.162202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.162323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.162348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.162536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.162717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.162744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 
00:27:02.821 [2024-07-10 15:50:42.162887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.163071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.163096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.163236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.163389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.163431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.163598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.163724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.163750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.163923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.164085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.164111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.164270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.164419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.164461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.164601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.164761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.164788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.164978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.165148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.165181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 
00:27:02.821 [2024-07-10 15:50:42.165356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.165509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.165536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.165677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.165825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.165850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.166013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.166171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.166197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.166336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.166468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.166494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.166693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.166864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.166901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.167065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.167232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.167260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.167434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.167597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.167623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 
00:27:02.821 [2024-07-10 15:50:42.167764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.167924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.167951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.168133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.168317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.168342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.168515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.168682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.168728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:02.821 [2024-07-10 15:50:42.168911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.169062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.821 [2024-07-10 15:50:42.169088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:02.821 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.169223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.169389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.169431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.169597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.169783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.169808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.169939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.170129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.170157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 
00:27:03.102 [2024-07-10 15:50:42.170326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.170522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.170549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.170710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.170839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.170865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.170987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.171123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.171151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.171285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.171415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.171448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.171610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.171773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.171800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.171952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.172094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.172122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.172319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.172531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.172561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 
00:27:03.102 [2024-07-10 15:50:42.172694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.172839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.172865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.173027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.173175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.173202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.173360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.173527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.173554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.173719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.173882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.173907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.174070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.174266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.174291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.174423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.174563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.174589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.174717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.174855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.174880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 
00:27:03.102 [2024-07-10 15:50:42.175012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.175166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.175192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.175350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.175509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.175535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.102 qpair failed and we were unable to recover it. 00:27:03.102 [2024-07-10 15:50:42.175674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.102 [2024-07-10 15:50:42.175831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.175856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.176018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.176191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.176216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.176407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.176600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.176626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.176792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.176993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.177021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.177198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.177372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.177400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 
00:27:03.103 [2024-07-10 15:50:42.177593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.177768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.177797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.177977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.178197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.178223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.178401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.178559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.178588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.178773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.178933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.178975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.179159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.179345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.179385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.179588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.179896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.179950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.180163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.180339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.180367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 
00:27:03.103 [2024-07-10 15:50:42.180549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.180687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.180713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.180873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.181006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.181031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.181186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.181341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.181369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.181525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.181685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.181726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.181925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.182262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.182317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.182518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.182677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.182702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.182865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.183144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.183198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 
00:27:03.103 [2024-07-10 15:50:42.183401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.183591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.183616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.183778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.183955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.183983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.184238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.184447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.184475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.184658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.184967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.185029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.185246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.185412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.185449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.185592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.185716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.185742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.185963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.186164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.186245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 
00:27:03.103 [2024-07-10 15:50:42.186433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.186600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.186626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.186804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.187010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.187036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.103 qpair failed and we were unable to recover it. 00:27:03.103 [2024-07-10 15:50:42.187218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.103 [2024-07-10 15:50:42.187395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.187440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.187624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.187759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.187785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.187970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.188202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.188253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.188476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.188653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.188681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.188865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.189055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.189110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 
00:27:03.104 [2024-07-10 15:50:42.189314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.189496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.189522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.189684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.189877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.189906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.190117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.190300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.190328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.190512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.190652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.190680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.190867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.191006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.191035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.191186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.191388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.191433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.191586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.191776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.191801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 
00:27:03.104 [2024-07-10 15:50:42.191964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.192146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.192172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.192362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.192526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.192556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.192715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.192976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.193026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.193224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.193417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.193450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.193639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.193851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.193879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.194095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.194263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.194305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.194530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.194656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.194682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 
00:27:03.104 [2024-07-10 15:50:42.194872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.194995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.195020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.195223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.195385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.195432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.195617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.195802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.195830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.196002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.196139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.196164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.196335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.196545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.196575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.196793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.197101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.197159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.197375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.197527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.197556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 
00:27:03.104 [2024-07-10 15:50:42.197759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.198060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.104 [2024-07-10 15:50:42.198130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.104 qpair failed and we were unable to recover it. 00:27:03.104 [2024-07-10 15:50:42.198327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.198496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.198526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.198711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.198850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.198896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.199103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.199295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.199325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.199506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.199683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.199722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.199904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.200088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.200115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.200292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.200450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.200480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 
00:27:03.105 [2024-07-10 15:50:42.200671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.200905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.200957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.201143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.201349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.201376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.201565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.201802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.201859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.202039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.202241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.202268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.202458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.202666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.202695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.202880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.203085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.203110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.203292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.203470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.203498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 
00:27:03.105 [2024-07-10 15:50:42.203714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.203851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.203876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.204037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.204214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.204242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.204421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.204619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.204644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.204831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.205039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.205067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.205286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.205500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.205529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.205693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.205853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.205880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.206104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.206289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.206317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 
00:27:03.105 [2024-07-10 15:50:42.206509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.206674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.206699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.206875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.207060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.207086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.207246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.207430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.207460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.207641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.207867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.207919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.208114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.208295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.208336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.208527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.208725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.208752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 00:27:03.105 [2024-07-10 15:50:42.208966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.209254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.105 [2024-07-10 15:50:42.209303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.105 qpair failed and we were unable to recover it. 
00:27:03.105 [2024-07-10 15:50:42.209525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.105 [2024-07-10 15:50:42.209670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.105 [2024-07-10 15:50:42.209715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:03.106 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 twice, then the nvme_tcp_qpair_connect_sock error) followed by "qpair failed and we were unable to recover it." repeats for tqpair=0x1d599f0 from 15:50:42.209885 through 15:50:42.215748 ...]
00:27:03.106 [2024-07-10 15:50:42.215928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.106 [2024-07-10 15:50:42.216129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.106 [2024-07-10 15:50:42.216175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420
00:27:03.106 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f6a00000b90 from 15:50:42.216373 through 15:50:42.268690 ...]
00:27:03.111 [2024-07-10 15:50:42.268940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.111 [2024-07-10 15:50:42.269287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.111 [2024-07-10 15:50:42.269346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:03.111 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x7f6a08000b90 from 15:50:42.269536 through 15:50:42.274185 ...]
00:27:03.111 [2024-07-10 15:50:42.274369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.111 [2024-07-10 15:50:42.274601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.111 [2024-07-10 15:50:42.274627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.111 qpair failed and we were unable to recover it. 00:27:03.111 [2024-07-10 15:50:42.274896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.111 [2024-07-10 15:50:42.275131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.111 [2024-07-10 15:50:42.275159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.111 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.275402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.275578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.275603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.275844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.275997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.276024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.276203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.276381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.276410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.276605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.276803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.276827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.277016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.277195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.277223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 
00:27:03.112 [2024-07-10 15:50:42.277369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.277559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.277590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.277754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.277885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.277926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.278140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.278310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.278337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.278509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.278672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.278698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.278921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.279087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.279115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.279327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.279515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.279556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.279752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.279918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.279958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 
00:27:03.112 [2024-07-10 15:50:42.280147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.280384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.280412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.280564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.280746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.280774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.280983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.281170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.281199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.281441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.281645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.281675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.281899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.282091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.282117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.282343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.282554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.282581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.282768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.282912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.282939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 
00:27:03.112 [2024-07-10 15:50:42.283131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.283299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.283327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.283510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.283674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.283700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.283896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.284140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.284201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.284374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.284556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.284582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.284766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.285045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.285109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.285364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.285544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.285570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.285756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.285931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.285962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 
00:27:03.112 [2024-07-10 15:50:42.286110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.286263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.286293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.286528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.286716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.286744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.112 qpair failed and we were unable to recover it. 00:27:03.112 [2024-07-10 15:50:42.286931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.112 [2024-07-10 15:50:42.287104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.287131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.287366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.287576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.287602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.287780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.287967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.287995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.288173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.288366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.288392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.288646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.288824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.288852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 
00:27:03.113 [2024-07-10 15:50:42.289002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.289204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.289229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.289455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.289660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.289689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.289848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.290061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.290119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.290277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.290474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.290502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.290683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.290904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.290942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.291166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.291368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.291392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.291599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.291811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.291838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 
00:27:03.113 [2024-07-10 15:50:42.292042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.292214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.292241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.292419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.292612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.292637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.292770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.292931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.292978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.293181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.293394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.293440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.293636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.293865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.293894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.294097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.294355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.294382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.294584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.294866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.294918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 
00:27:03.113 [2024-07-10 15:50:42.295123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.295299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.295328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.295512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.295680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.295729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.295878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.296087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.296113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.296320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.296510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.296535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.296791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.296996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.297023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.297236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.297452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.297480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.297697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.297853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.297878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 
00:27:03.113 [2024-07-10 15:50:42.298032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.298194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.298237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.298440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.298614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.298641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.298850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.299018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.299077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.299233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.299377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.299402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.113 [2024-07-10 15:50:42.299589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.299778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.113 [2024-07-10 15:50:42.299806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.113 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.299970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.300151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.300176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.300360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.300544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.300574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 
00:27:03.114 [2024-07-10 15:50:42.300785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.300939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.300964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.301150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.301324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.301353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.301516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.301707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.301736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.301955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.302171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.302195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.302352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.302492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.302517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.302679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.302846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.302874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.303076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.303249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.303276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 
00:27:03.114 [2024-07-10 15:50:42.303501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.303684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.303710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.303865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.303994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.304033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.304176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.304325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.304354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.304505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.304653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.304683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.304859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.305045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.305069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.305284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.305475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.305500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.305663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.305850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.305875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 
00:27:03.114 [2024-07-10 15:50:42.306072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.306255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.306284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.306503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.306750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.306800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.306955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.307157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.307185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.307342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.307532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.307558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.307718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.307868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.307895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.308108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.308244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.308270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.308478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.308683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.308711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 
00:27:03.114 [2024-07-10 15:50:42.308890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.309067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.309097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.309274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.309489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.309515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.309676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.309977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.310029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.310204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.310381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.310410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.310649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.310812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.310851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.311008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.311212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.311240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.114 qpair failed and we were unable to recover it. 00:27:03.114 [2024-07-10 15:50:42.311453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.114 [2024-07-10 15:50:42.311640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.311666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 
00:27:03.115 [2024-07-10 15:50:42.311800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.311949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.311979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.312187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.312349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.312391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.312603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.312751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.312775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.312994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.313180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.313204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.313416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.313609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.313634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.313823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.314027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.314055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.314204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.314410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.314452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 
00:27:03.115 [2024-07-10 15:50:42.314653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.314877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.314907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.315086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.315339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.315367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.315546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.315718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.315743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.315934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.316096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.316121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.316321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.316522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.316550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.316764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.316949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.316975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.317109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.317388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.317416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 
00:27:03.115 [2024-07-10 15:50:42.317628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.317807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.317834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.318013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.318187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.318215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.318454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.318625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.318650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.318832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.319032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.319060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.319239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.319442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.319472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.319671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.319842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.319882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.320070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.320230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.320255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 
00:27:03.115 [2024-07-10 15:50:42.320445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.320627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.320657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.320866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.321048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.321077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.321289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.321453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.321494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.321676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.321851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.321880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.322056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.322236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.322267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.322465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.322650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.322693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.115 [2024-07-10 15:50:42.322895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.323070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.323099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 
00:27:03.115 [2024-07-10 15:50:42.323304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.323477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.115 [2024-07-10 15:50:42.323506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.115 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.323722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.323907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.323932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.324123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.324299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.324328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.324532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.324714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.324742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.324943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.325132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.325158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.325300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.325455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.325489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.325735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.325927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.325955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 
00:27:03.116 [2024-07-10 15:50:42.326145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.326279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.326306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.326505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.326669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.326695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.326829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.326991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.327034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.327177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.327333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.327362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.327575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.327713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.327754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.327903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.328068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.328093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.328226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.328418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.328451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 
00:27:03.116 [2024-07-10 15:50:42.328664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.328849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.328874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.329042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.329238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.329264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.329463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.329621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.329647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.329839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.330000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.330027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.330221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.330393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.330421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.330618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.330785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.330826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.330993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.331170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.331198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 
00:27:03.116 [2024-07-10 15:50:42.331353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.331561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.331590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.331777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.331954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.331980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.332144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.332322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.332348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.332505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.332708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.332736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.332938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.333111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.333140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.116 qpair failed and we were unable to recover it. 00:27:03.116 [2024-07-10 15:50:42.333325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.116 [2024-07-10 15:50:42.333501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.333530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.333677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.333844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.333887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 
00:27:03.117 [2024-07-10 15:50:42.334090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.334288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.334316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.334494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.334699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.334728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.334912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.335041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.335067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.335265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.335409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.335445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.335662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.335821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.335846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.336006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.336189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.336217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.336353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.336539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.336565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 
00:27:03.117 [2024-07-10 15:50:42.336756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.336978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.337007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.337191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.337399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.337432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.337640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.337816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.337844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.338029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.338197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.338225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.338409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.338629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.338662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.338840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.339041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.339069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.339239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.339450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.339477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 
00:27:03.117 [2024-07-10 15:50:42.339668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.339824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.339865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.340060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.340224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.340249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.340385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.340578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.340604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.340791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.340994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.341023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.341199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.341351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.341379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.341544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.341684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.341712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.341866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.342019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.342060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 
00:27:03.117 [2024-07-10 15:50:42.342264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.342414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.342456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.342670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.342872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.342900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.343052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.343215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.343256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.117 qpair failed and we were unable to recover it. 00:27:03.117 [2024-07-10 15:50:42.343437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.117 [2024-07-10 15:50:42.343610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.343639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.343824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.344004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.344033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.344214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.344393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.344421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.344617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.344803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.344844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 
00:27:03.118 [2024-07-10 15:50:42.345019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.345209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.345238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.345401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.345546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.345572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.345754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.345915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.345941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.346109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.346301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.346334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.346489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.346690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.346718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.346905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.347055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.347081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.347215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.347376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.347402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 
00:27:03.118 [2024-07-10 15:50:42.347596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.347757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.347783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.347938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.348100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.348126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.348265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.348434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.348468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.348609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.348790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.348818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.349028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.349188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.349232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.349377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.349540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.349569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.349730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.349914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.349943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 
00:27:03.118 [2024-07-10 15:50:42.350074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.350302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.350328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.350482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.350648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.350691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.350848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.351009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.351034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.351247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.351435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.351475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.351659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.351862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.351890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.352092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.352289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.352314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.352474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.352644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.352674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 
00:27:03.118 [2024-07-10 15:50:42.352858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.353060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.353088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.353275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.353506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.353536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.353708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.353890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.353919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.118 [2024-07-10 15:50:42.354077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.354258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.118 [2024-07-10 15:50:42.354286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.118 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.354502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.354691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.354717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.354857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.355035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.355063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.355210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.355411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.355443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 
00:27:03.119 [2024-07-10 15:50:42.355653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.355928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.355967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.356148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.356334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.356362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.356544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.356720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.356749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.356933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.357133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.357161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.357370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.357561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.357587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.357770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.357966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.357994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.358190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.358357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.358386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 
00:27:03.119 [2024-07-10 15:50:42.358619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.358813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.358838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.358998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.359185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.359210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.359369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.359613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.359642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.359816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.359992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.360021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.360174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.360346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.360374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.360550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.360744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.360770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.360992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.361189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.361215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 
00:27:03.119 [2024-07-10 15:50:42.361378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.361586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.361615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.361778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.361939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.361980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.362158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.362325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.362354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.362556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.362703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.362731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.362885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.363042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.363069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.363281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.363435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.363464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.363656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.363917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.363945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 
00:27:03.119 [2024-07-10 15:50:42.364116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.364324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.364349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.364573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.364785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.364810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.365020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.365253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.365282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.365465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.365646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.365674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.365880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.366097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.366122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.366381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.366574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.366600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.119 qpair failed and we were unable to recover it. 00:27:03.119 [2024-07-10 15:50:42.366843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.367026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.119 [2024-07-10 15:50:42.367053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 
00:27:03.120 [2024-07-10 15:50:42.367202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.367369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.367396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.367581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.367730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.367758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.367914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.368074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.368115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.368338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.368517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.368543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.368742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.368944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.368972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.369178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.369389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.369417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.369570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.369773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.369801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 
00:27:03.120 [2024-07-10 15:50:42.369957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.370160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.370189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.370398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.370592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.370618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.370901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.371157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.371197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.371393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.371544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.371572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.371753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.371932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.371996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.372201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.372343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.372372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.372548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.372725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.372755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 
00:27:03.120 [2024-07-10 15:50:42.372950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.373110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.373139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.373318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.373496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.373525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.373669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.373847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.373876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.374052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.374256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.374284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.374458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.374637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.374665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.374874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.375073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.375101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.375278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.375651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.375680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 
00:27:03.120 [2024-07-10 15:50:42.375884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.376055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.376083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.376263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.376415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.376451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.120 qpair failed and we were unable to recover it. 00:27:03.120 [2024-07-10 15:50:42.376646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.376803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.120 [2024-07-10 15:50:42.376828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.376991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.377215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.377243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.377390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.377598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.377627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.377841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.378102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.378131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.378338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.378518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.378547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 
00:27:03.121 [2024-07-10 15:50:42.378753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.378982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.379013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.379193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.379332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.379357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.379547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.379736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.379763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.379949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.380213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.380253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.380451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.380618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.380659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.380850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.381040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.381064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.381447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.381644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.381669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 
00:27:03.121 [2024-07-10 15:50:42.381873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.382034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.382058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.382220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.382398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.382432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.382585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.382780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.382820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.383016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.383233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.383282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.383464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.383640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.383669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.383846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.384033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.384062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.384278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.384436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.384462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 
00:27:03.121 [2024-07-10 15:50:42.384603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.384792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.384831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.385088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.385283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.385307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.385470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.385632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.385658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.385985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.386293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.386344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.386541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.386753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.386778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.386987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.387151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.387175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.387388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.387620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.387646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 
00:27:03.121 [2024-07-10 15:50:42.387820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.387994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.388034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.388227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.388384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.388409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.388631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.388871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.121 [2024-07-10 15:50:42.388922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.121 qpair failed and we were unable to recover it. 00:27:03.121 [2024-07-10 15:50:42.389267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.389505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.389535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.389685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.389849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.389889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.390117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.390269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.390297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.390488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.390696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.390724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 
00:27:03.122 [2024-07-10 15:50:42.390993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.391150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.391175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.391343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.391497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.391528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.391707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.391978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.392028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.392206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.392361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.392385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.392540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.392679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.392719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.392946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.393122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.393151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.393302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.393475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.393520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 
00:27:03.122 [2024-07-10 15:50:42.393698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.393891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.393917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.394126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.394344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.394373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.394530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.394717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.394758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.394961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.395188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.395213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.395410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.395571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.395597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.395750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.395948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.395988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.396184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.396392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.396417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 
00:27:03.122 [2024-07-10 15:50:42.396574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.396725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.396754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.396991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.397163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.397204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.397368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.397599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.397644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.397847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.398079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.398102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.398292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.398466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.398493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.398680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.399002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.399060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.399295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.399434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.399460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 
00:27:03.122 [2024-07-10 15:50:42.399653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.399841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.399866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.400049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.400234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.400264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.400416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.400604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.400632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.122 qpair failed and we were unable to recover it. 00:27:03.122 [2024-07-10 15:50:42.400810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.122 [2024-07-10 15:50:42.401016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.401045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.401244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.401414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.401447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.401627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.401809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.401838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.402059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.402245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.402270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 
00:27:03.123 [2024-07-10 15:50:42.402430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.402585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.402615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.402844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.403047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.403135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.403320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.403490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.403529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.403751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.403951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.403975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.404180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.404362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.404395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.404545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.404741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.404766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.404966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.405185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.405211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 
00:27:03.123 [2024-07-10 15:50:42.405349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.405554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.405579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.405759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.405958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.405999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.406167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.406372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.406399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.406606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.406827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.406858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.407017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.407228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.407257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.407464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.407639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.407667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.407855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.408055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.408083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 
00:27:03.123 [2024-07-10 15:50:42.408291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.408458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.408489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.408653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.408877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.408906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.409132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.409333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.409361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.409570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.409697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.409724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.409886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.410077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.410103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.410249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.410420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.410451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.410646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.410782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.410808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 
00:27:03.123 [2024-07-10 15:50:42.410976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.411133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.411160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.411295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.411462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.411489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.411647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.411780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.411805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.411937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.412110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.412140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.412278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.412410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.412440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.412605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.412794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.412820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.123 qpair failed and we were unable to recover it. 00:27:03.123 [2024-07-10 15:50:42.412996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.123 [2024-07-10 15:50:42.413181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.413208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 
00:27:03.124 [2024-07-10 15:50:42.413371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.413516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.413544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.413710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.413882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.413910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.414095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.414250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.414276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.414409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.414578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.414605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.414769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.414956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.414981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.415147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.415309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.415335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.415473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.415613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.415643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 
00:27:03.124 [2024-07-10 15:50:42.415815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.415952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.415978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.416140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.416275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.416301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.416474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.416605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.416632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.416790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.416958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.416984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.417123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.417308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.417334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.417523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.417661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.417687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.417854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.418013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.418040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 
00:27:03.124 [2024-07-10 15:50:42.418227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.418392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.418418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.418586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.418752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.418778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.418978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.419172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.419198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.419400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.419574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.419600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.419763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.419950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.419976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.420108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.420298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.420324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.420490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.420653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.420680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 
00:27:03.124 [2024-07-10 15:50:42.420872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.421031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.421059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.421223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.421357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.421383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.421529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.421692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.421718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.421877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.422063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.422089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.422254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.422387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.422412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.422563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.422729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.422755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 00:27:03.124 [2024-07-10 15:50:42.422918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.423108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.124 [2024-07-10 15:50:42.423134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.124 qpair failed and we were unable to recover it. 
00:27:03.125 [2024-07-10 15:50:42.423275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.423444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.423471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.423634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.423797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.423824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.424014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.424148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.424173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.424364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.424524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.424551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.424687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.424849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.424875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.425036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.425186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.425212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.425375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.425513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.425539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 
00:27:03.125 [2024-07-10 15:50:42.425697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.425861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.425886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.426021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.426180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.426207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.426375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.426508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.426536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.426719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.426879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.426906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.427047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.427205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.427231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.427397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.427565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.427594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.427730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.427932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.427959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 
00:27:03.125 [2024-07-10 15:50:42.428095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.428255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.428281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.428416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.428613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.428639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.428801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.428963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.428989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.429148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.429312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.429338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.429502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.429634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.429659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.429840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.429989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.430018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.430180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.430339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.430365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 
00:27:03.125 [2024-07-10 15:50:42.430503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.430662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.430688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.430825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.430957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.430984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.431115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.431286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.125 [2024-07-10 15:50:42.431312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.125 qpair failed and we were unable to recover it. 00:27:03.125 [2024-07-10 15:50:42.431505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.431667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.431692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.431855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.431993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.432020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.432184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.432347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.432374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.432565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.432727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.432753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 
00:27:03.126 [2024-07-10 15:50:42.432941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.433100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.433125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.433282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.433417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.433448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.433585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.433739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.433764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.433958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.434088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.434123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.434287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.434439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.434465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.434626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.434781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.434805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.434938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.435168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.435192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 
00:27:03.126 [2024-07-10 15:50:42.435346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.435480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.435507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.435645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.435812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.435837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.435996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.436152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.436178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.436323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.436488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.436515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.436644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.436810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.436836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.437003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.437161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.437186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.437348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.437511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.437537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 
00:27:03.126 [2024-07-10 15:50:42.437677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.437838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.437863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.438028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.438186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.438213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.438376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.438570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.438597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.438738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.438874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.438900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.439084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.439246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.439271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.439447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.439582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.439607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.439768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.439929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.439954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 
00:27:03.126 [2024-07-10 15:50:42.440085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.440221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.440251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.440443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.440601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.440628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.440817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.440986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.441011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.441175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.441337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.441363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.441524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.441661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.441687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.441824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.441984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.126 [2024-07-10 15:50:42.442009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.126 qpair failed and we were unable to recover it. 00:27:03.126 [2024-07-10 15:50:42.442172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.442308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.442333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 
00:27:03.127 [2024-07-10 15:50:42.442499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.442663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.442688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.442876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.443175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.443495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.443814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.443976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.444144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.444307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.444332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.444497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.444656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.444681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 
00:27:03.127 [2024-07-10 15:50:42.444846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.445007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.445034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.445176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.445357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.445382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.445573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.445760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.445785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.445926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.446126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.446151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.446290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.446450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.446477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.446640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.446795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.446820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.446960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.447146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.447171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 
00:27:03.127 [2024-07-10 15:50:42.447345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.447467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.447493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.447650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.447845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.447870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.448013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.448164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.448189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.448379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.448520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.448547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.448731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.448864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.448889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.449050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.449236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.449262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.449390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.449579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.449605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 
00:27:03.127 [2024-07-10 15:50:42.449769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.449927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.449952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.450110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.450264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.450289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.450436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.450565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.450590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.450730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.450897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.450924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.451089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.451253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.451280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.451412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.451566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.451593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.451755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.451899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.451927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 
00:27:03.127 [2024-07-10 15:50:42.452112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.452297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.452322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.452475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.452651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.452684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.127 qpair failed and we were unable to recover it. 00:27:03.127 [2024-07-10 15:50:42.452817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.127 [2024-07-10 15:50:42.452964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.452993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.128 qpair failed and we were unable to recover it. 00:27:03.128 [2024-07-10 15:50:42.453156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.453292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.453318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.128 qpair failed and we were unable to recover it. 00:27:03.128 [2024-07-10 15:50:42.453474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.453615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.453642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.128 qpair failed and we were unable to recover it. 00:27:03.128 [2024-07-10 15:50:42.453783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.453919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.453945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.128 qpair failed and we were unable to recover it. 00:27:03.128 [2024-07-10 15:50:42.454104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.454278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.454304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.128 qpair failed and we were unable to recover it. 
00:27:03.128 [2024-07-10 15:50:42.454496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.454641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.454674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.128 qpair failed and we were unable to recover it. 00:27:03.128 [2024-07-10 15:50:42.454836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.454971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.128 [2024-07-10 15:50:42.454996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.128 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.455131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.455279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.455305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.455453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.455609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.455634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.455762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.455928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.455955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.456088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.456224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.456250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.456414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.456580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.456608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 
00:27:03.401 [2024-07-10 15:50:42.456739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.456875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.456901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.457061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.457198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.457235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.457417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.457581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.457611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.457798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.457959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.457987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.458124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.458258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.458283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.458456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.458630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.458656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.458816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.458970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.458995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 
00:27:03.401 [2024-07-10 15:50:42.459157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.459323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.459348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.459479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.459610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.459636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.459821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.459984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.460010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.460173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.460335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.460361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.460547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.460701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.460726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.460873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.461008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.461037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.461167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.461324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.461349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 
00:27:03.401 [2024-07-10 15:50:42.461513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.461644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.461671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.461829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.462016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.462041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.401 [2024-07-10 15:50:42.462199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.462383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.401 [2024-07-10 15:50:42.462408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.401 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.462572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.462700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.462725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.462860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.463050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.463075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.463213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.463378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.463403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.463551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.463682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.463707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 
00:27:03.402 [2024-07-10 15:50:42.463892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.464032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.464059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.464306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.464496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.464522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.464688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.464859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.464885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.465039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.465172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.465197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.465358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.465520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.465545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.465706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.465840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.465864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.466024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.466177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.466202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 
00:27:03.402 [2024-07-10 15:50:42.466362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.466531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.466556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.466687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.466830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.466855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.467022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.467155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.467180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.467338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.467478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.467503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.467664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.467813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.467838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.468006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.468169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.468195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.468358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.468494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.468521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 
00:27:03.402 [2024-07-10 15:50:42.468683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.468846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.468873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.469058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.469202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.469228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.469402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.469543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.469569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.469708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.469833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.469859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.470019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.470151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.470187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.470318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.470506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.470532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.470699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.470847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.470872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 
00:27:03.402 [2024-07-10 15:50:42.471030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.471209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.471235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.471430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.471588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.471613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.471770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.471931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.471957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.472117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.472306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.472331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.472497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.472679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.402 [2024-07-10 15:50:42.472704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.402 qpair failed and we were unable to recover it. 00:27:03.402 [2024-07-10 15:50:42.472841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.472999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.473026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.473179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.473337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.473364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 
00:27:03.403 [2024-07-10 15:50:42.473531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.473662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.473687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.473848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.474010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.474035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.474174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.474328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.474354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.474514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.474701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.474735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.474872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.475065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.475091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.475224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.475386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.475412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.475599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.475789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.475815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 
00:27:03.403 [2024-07-10 15:50:42.475973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.476100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.476129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.476318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.476467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.476493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.476628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.476794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.476820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.476950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.477139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.477165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.477330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.477494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.477519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.477656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.477794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.477822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.477985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.478144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.478170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 
00:27:03.403 [2024-07-10 15:50:42.478340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.478516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.478546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.478677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.478847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.478873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.479011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.479154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.479180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.479371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.479538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.479564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.479703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.479866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.479892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.480086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.480265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.480291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.480430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.480595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.480621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 
00:27:03.403 [2024-07-10 15:50:42.480760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.480919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.480945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.481111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.481256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.481282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.481422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.481592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.481618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.481755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.481936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.481962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.482154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.482320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.482346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.482487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.482648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.482674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.482807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.482941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.482974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 
00:27:03.403 [2024-07-10 15:50:42.483162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.483363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.403 [2024-07-10 15:50:42.483389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.403 qpair failed and we were unable to recover it. 00:27:03.403 [2024-07-10 15:50:42.483576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.483721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.483758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.483952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.484126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.484153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.484294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.484473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.484499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.484663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.484833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.484859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.484992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.485148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.485174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.485362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.485524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.485550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 
00:27:03.404 [2024-07-10 15:50:42.485746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.485884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.485909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.486066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.486198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.486224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.486380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.486520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.486547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.486682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.486836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.486862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.487049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.487185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.487210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.487344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.487503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.487530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.487718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.487873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.487899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 
00:27:03.404 [2024-07-10 15:50:42.488053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.488191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.488224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.488391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.488560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.488585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.488747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.488914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.488941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.489119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.489344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.489404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.489597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.489770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.489801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.490002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.490210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.490241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.490419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.490607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.490637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 
00:27:03.404 [2024-07-10 15:50:42.490821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.490996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.491027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.491184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.491387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.491416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.491602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.491819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.491871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.492079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.492285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.492313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.492500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.492661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.492687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.492903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.493054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.493083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.493300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.493443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.493490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 
00:27:03.404 [2024-07-10 15:50:42.493653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.493816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.493843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.494047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.494195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.494226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.494435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.494611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.494637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.494833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.495019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.495045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.404 qpair failed and we were unable to recover it. 00:27:03.404 [2024-07-10 15:50:42.495342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.404 [2024-07-10 15:50:42.495561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.495586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.495789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.495980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.496006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.496187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.496376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.496405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 
00:27:03.405 [2024-07-10 15:50:42.496620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.496767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.496796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.496964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.497099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.497127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.497306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.497492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.497522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.497692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.497899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.497927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.498104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.498301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.498330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.498519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.498725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.498755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.498914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.499076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.499118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 
00:27:03.405 [2024-07-10 15:50:42.499284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.499490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.499516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.499651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.499839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.499865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.500043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.500248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.500277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.500450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.500653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.500679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.500834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.501030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.501091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.501274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.501450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.501500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.501664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.501849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.501891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 
00:27:03.405 [2024-07-10 15:50:42.502102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.502270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.502299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.502507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.502670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.502696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.502877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.503076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.503140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.503320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.503485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.503511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.503675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.503913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.503965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.504182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.504396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.504443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.504598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.504789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.504815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 
00:27:03.405 [2024-07-10 15:50:42.504997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.505162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.505201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.405 qpair failed and we were unable to recover it. 00:27:03.405 [2024-07-10 15:50:42.505371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.405 [2024-07-10 15:50:42.505557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.505587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.505801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.505966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.506010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.506205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.506332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.506375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.506562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.506725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.506751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.506939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.507098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.507129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.507313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.507475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.507502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 
00:27:03.406 [2024-07-10 15:50:42.507665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.507859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.507888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.508081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.508243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.508269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.508453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.508611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.508637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.508827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.508988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.509033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.509173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.509356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.509382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.509557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.509753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.509779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.509923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.510062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.510090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 
00:27:03.406 [2024-07-10 15:50:42.510256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.510417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.510451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.510643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.510849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.510877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.511053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.511198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.511228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.511437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.511611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.511640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.511831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.511974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.512001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.512161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.512315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.512341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.512469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.512603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.512629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 
00:27:03.406 [2024-07-10 15:50:42.512826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.512961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.512987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.513190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.513371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.513400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.513588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.513760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.513814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.513995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.514132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.514158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.514351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.514567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.514596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.514756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.514942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.514967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.515156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.515305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.515333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 
00:27:03.406 [2024-07-10 15:50:42.515503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.515673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.515701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.406 qpair failed and we were unable to recover it. 00:27:03.406 [2024-07-10 15:50:42.515886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.406 [2024-07-10 15:50:42.516074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.516100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.516254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.516385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.516436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.516655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.516841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.516867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.517054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.517234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.517263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.517454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.517622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.517651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.517877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.518035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.518061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 
00:27:03.407 [2024-07-10 15:50:42.518252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.518440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.518480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.518657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.518794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.518835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.519008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.519154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.519183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.519386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.519585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.519612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.519752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.519914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.519958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.520161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.520305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.520334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.520481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.520636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.520665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 
00:27:03.407 [2024-07-10 15:50:42.520829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.521019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.521064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.521280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.521448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.521505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.521693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.521858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.521884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.522106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.522270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.522295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.522451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.522623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.522649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.522877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.523098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.523127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.523309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.523474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.523501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 
00:27:03.407 [2024-07-10 15:50:42.523690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.524041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.524093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.524308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.524468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.524511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.524693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.524837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.524883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.525072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.525206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.525232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.525416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.525583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.525612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.525823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.526033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.526088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.526271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.526412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.526445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 
00:27:03.407 [2024-07-10 15:50:42.526626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.526882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.526937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.527142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.527334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.527363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.527567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.527741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.527770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.527918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.528117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.528146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.407 qpair failed and we were unable to recover it. 00:27:03.407 [2024-07-10 15:50:42.528331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.407 [2024-07-10 15:50:42.528501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.528527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.528707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.528924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.528950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.529112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.529333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.529359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 
00:27:03.408 [2024-07-10 15:50:42.529526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.529756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.529820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.529980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.530157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.530185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.530326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.530521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.530548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.530731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.530870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.530896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.531084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.531278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.531305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.531510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.531661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.531690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.531887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.532130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.532194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 
00:27:03.408 [2024-07-10 15:50:42.532364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.532560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.532589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.532763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.532945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.532974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.533173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.533380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.533409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.533602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.533740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.533776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.533967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.534142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.534186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.534373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.534508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.534534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.534699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.534898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.534946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 
00:27:03.408 [2024-07-10 15:50:42.535132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.535307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.535336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.535551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.535777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.535843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.535991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.536143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.536172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.536379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.536573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.536602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.536765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.536937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.536984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.537188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.537365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.537394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.537587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.537820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.537869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 
00:27:03.408 [2024-07-10 15:50:42.538075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.538259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.538288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.538497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.538675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.538705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.538918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.539190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.539243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.539410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.539599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.539628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.408 [2024-07-10 15:50:42.539805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.539979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.408 [2024-07-10 15:50:42.540008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.408 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.540189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.540369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.540396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.540542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.540704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.540748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 
00:27:03.409 [2024-07-10 15:50:42.540927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.541136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.541165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.541369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.541565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.541591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.541754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.541917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.541948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.542173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.542337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.542363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.542551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.542754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.542825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.543034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.543232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.543266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.543492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.543665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.543694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 
00:27:03.409 [2024-07-10 15:50:42.543874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.544044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.544073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.544225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.544411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.544445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.544644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.544783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.544810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.544973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.545197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.545246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.545458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.545661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.545710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.545926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.546199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.546246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.546460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.546642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.546672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 
00:27:03.409 [2024-07-10 15:50:42.546852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.547004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.547047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.547225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.547404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.547439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.547657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.547805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.547831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.548020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.548221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.548250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.548454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.548600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.548629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.548832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.548999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.549028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.549208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.549391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.549420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 
00:27:03.409 [2024-07-10 15:50:42.549642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.549923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.549988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.550166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.550315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.550345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.550542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.550703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.550731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.550908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.551110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.551139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.551343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.551490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.551520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.551682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.551808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.551850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.409 qpair failed and we were unable to recover it. 00:27:03.409 [2024-07-10 15:50:42.552022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.552224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.409 [2024-07-10 15:50:42.552252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 
00:27:03.410 [2024-07-10 15:50:42.552466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.552657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.552694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 00:27:03.410 [2024-07-10 15:50:42.552888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.553019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.553045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 00:27:03.410 [2024-07-10 15:50:42.553208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.553400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.553436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 00:27:03.410 [2024-07-10 15:50:42.553597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.553770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.553799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 00:27:03.410 [2024-07-10 15:50:42.554012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.554203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.554250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 00:27:03.410 [2024-07-10 15:50:42.554439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.554627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.554657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 00:27:03.410 [2024-07-10 15:50:42.554839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.555013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.410 [2024-07-10 15:50:42.555043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.410 qpair failed and we were unable to recover it. 
00:27:03.410 [2024-07-10 15:50:42.555213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.410 [2024-07-10 15:50:42.555420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:03.410 [2024-07-10 15:50:42.555456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:03.410 qpair failed and we were unable to recover it.
00:27:03.410 [... the same four-line error group repeats continuously, with only the timestamps advancing (log timestamps 15:50:42.555213 through 15:50:42.617246, elapsed 00:27:03.410 through 00:27:03.416): two posix.c:1032:posix_sock_create connect() failures with errno = 111, followed by nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x1d599f0 (addr=10.0.0.2, port=4420) and "qpair failed and we were unable to recover it." ...]
00:27:03.416 [2024-07-10 15:50:42.617406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.617640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.617667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.617831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.617961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.617989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.618184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.618333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.618364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.618570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.618747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.618777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.618962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.619135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.619163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.619368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.619571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.619637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.619840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.620030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.620056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 
00:27:03.416 [2024-07-10 15:50:42.620240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.620421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.620458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.620641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.620805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.620831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.620993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.621184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.621212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.621392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.621601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.621630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.621814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.622021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.622050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.622252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.622470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.622497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.622659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.622822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.622848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 
00:27:03.416 [2024-07-10 15:50:42.623014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.623194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.623220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.623438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.623607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.623636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.623813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.624081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.624134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.624321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.624459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.624502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.624705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.624939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.624992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.625177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.625380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.625408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.625576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.625705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.625731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 
00:27:03.416 [2024-07-10 15:50:42.625864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.625993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.626021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.626185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.626387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.626416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.626602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.626760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.626804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.626987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.627164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.627193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.627371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.627529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.627561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.627746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.627902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.627928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.628140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.628301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.628331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 
00:27:03.416 [2024-07-10 15:50:42.628543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.628738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.416 [2024-07-10 15:50:42.628764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.416 qpair failed and we were unable to recover it. 00:27:03.416 [2024-07-10 15:50:42.628947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.629110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.629136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.629274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.629398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.629430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.629618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.629770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.629798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.630009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.630220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.630249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.630433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.630638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.630667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.630876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.631005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.631031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 
00:27:03.417 [2024-07-10 15:50:42.631186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.631323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.631351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.631557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.631723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.631749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.631936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.632065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.632092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.632292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.632440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.632468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.632624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.632792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.632820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.632986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.633174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.633201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.633364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.633489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.633516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 
00:27:03.417 [2024-07-10 15:50:42.633660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.633819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.633845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.634054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.634239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.634268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.634457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.634616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.634642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.634796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.635032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.635085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.635269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.635428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.635455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.635619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.635754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.635780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.635990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.636172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.636201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 
00:27:03.417 [2024-07-10 15:50:42.636343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.636493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.636523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.636710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.636910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.636964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.637149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.637314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.637340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.637559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.637704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.637732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.637885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.638050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.638076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.638238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.638415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.638450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.638627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.638775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.638806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 
00:27:03.417 [2024-07-10 15:50:42.639014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.639229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.639295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.639480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.639700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.639726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.639894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.640081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.640107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.640307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.640524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.640553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.417 qpair failed and we were unable to recover it. 00:27:03.417 [2024-07-10 15:50:42.640756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.417 [2024-07-10 15:50:42.640969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.641020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.641200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.641377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.641406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.641603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.641754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.641797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 
00:27:03.418 [2024-07-10 15:50:42.641974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.642140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.642169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.642370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.642590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.642617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.642774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.642933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.642959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.643145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.643351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.643380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.643543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.643717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.643746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.643923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.644092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.644118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.644307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.644492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.644522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 
00:27:03.418 [2024-07-10 15:50:42.644724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.644913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.644969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.645142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.645306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.645332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.645495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.645636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.645677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.645858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.646140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.646197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.646431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.646584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.646614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.646769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.646948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.646977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.647157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.647301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.647329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 
00:27:03.418 [2024-07-10 15:50:42.647546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.647705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.647734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.647915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.648121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.648154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.648324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.648505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.648535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.648708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.648887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.648916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.649097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.649275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.649304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.649488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.649623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.649652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.649813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.649979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.650005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 
00:27:03.418 [2024-07-10 15:50:42.650165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.650341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.650369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.650537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.650754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.650781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.650934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.651162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.651220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.651442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.651720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.651770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.651972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.652182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.652242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.652434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.652614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.652643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 00:27:03.418 [2024-07-10 15:50:42.652788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.652969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.652995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.418 qpair failed and we were unable to recover it. 
00:27:03.418 [2024-07-10 15:50:42.653202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.418 [2024-07-10 15:50:42.653369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.653398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.653600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.653805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.653835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.654009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.654211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.654266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.654451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.654591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.654617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.654815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.655078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.655104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.655291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.655449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.655479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.655651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.655815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.655844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 
00:27:03.419 [2024-07-10 15:50:42.656022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.656188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.656215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.656405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.656543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.656569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.656768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.656934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.656960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.657153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.657358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.657387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.657560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.657720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.657763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.657938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.658231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.658293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.658482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.658647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.658689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 
00:27:03.419 [2024-07-10 15:50:42.658908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.659046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.659073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.659278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.659461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.659489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.659654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.659816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.659843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.660033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.660196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.660225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.660372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.660501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.660528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.660688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.660824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.660850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.661039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.661214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.661240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 
00:27:03.419 [2024-07-10 15:50:42.661381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.661542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.661569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.661756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.661893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.661919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.662086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.662251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.662279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.662440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.662582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.662608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.662799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.662985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.663011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.419 qpair failed and we were unable to recover it. 00:27:03.419 [2024-07-10 15:50:42.663143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.419 [2024-07-10 15:50:42.663272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.663300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.663488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.663675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.663701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 
00:27:03.420 [2024-07-10 15:50:42.663838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.664158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.664469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.664802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.664991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.665155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.665288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.665314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.665453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.665606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.665632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.665820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.665962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.665987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 
00:27:03.420 [2024-07-10 15:50:42.666149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.666314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.666340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.666510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.666671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.666697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.666880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.667046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.667072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.667236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.667399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.667434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.667607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.667774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.667800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.667961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.668088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.668114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.668283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.668465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.668492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 
00:27:03.420 [2024-07-10 15:50:42.668637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.668801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.668827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.668961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.669139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.669165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.669355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.669538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.669565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.669728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.669912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.669937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.670097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.670256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.670282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.670412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.670578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.670604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.670767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.670894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.670924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 
00:27:03.420 [2024-07-10 15:50:42.671061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.671224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.671251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.671412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.671578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.671605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.671736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.671892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.671918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.672083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.672273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.672300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.672469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.672654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.672680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.672846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.673035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.673062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.673220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.673359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.673386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 
00:27:03.420 [2024-07-10 15:50:42.673552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.673710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.673736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.420 qpair failed and we were unable to recover it. 00:27:03.420 [2024-07-10 15:50:42.673879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.674012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.420 [2024-07-10 15:50:42.674038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.674200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.674366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.674392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.674574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.674705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.674731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.674870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.675009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.675035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.675203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.675354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.675380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.675537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.675694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.675721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 
00:27:03.421 [2024-07-10 15:50:42.675890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.676020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.676047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.676215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.676373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.676399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.676572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.676757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.676784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.676955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.677151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.677177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.677316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.677459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.677487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.677651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.677800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.677826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.677990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.678120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.678146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 
00:27:03.421 [2024-07-10 15:50:42.678288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.678430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.678457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.678581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.678706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.678733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.678861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.679183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.679486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.679779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.679964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.680122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.680317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.680343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 
00:27:03.421 [2024-07-10 15:50:42.680529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.680653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.680679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.680819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.680947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.680973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.681155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.681362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.681414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.681604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.681771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.681802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.681997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.682179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.682231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.682410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.682611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.682643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.682812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.683016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.683062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 
00:27:03.421 [2024-07-10 15:50:42.683236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.683383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.683413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.683615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.683808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.683858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.684076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.684261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.684293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.684476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.684669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.684700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.421 qpair failed and we were unable to recover it. 00:27:03.421 [2024-07-10 15:50:42.685009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.421 [2024-07-10 15:50:42.685174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.685205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.685356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.685555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.685591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.685799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.686106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.686161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 
00:27:03.422 [2024-07-10 15:50:42.686365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.686525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.686574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.686772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.687009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.687059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.687210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.687350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.687383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.687585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.687829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.687879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.688070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.688257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.688288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.688437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.688656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.688707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.688934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.689145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.689192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 
00:27:03.422 [2024-07-10 15:50:42.689398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.689586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.689617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.689775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.689992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.690038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.690218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.690416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.690451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.690649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.690857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.690905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.691103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.691298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.691329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.691514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.691760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.691792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.691971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.692158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.692188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 
00:27:03.422 [2024-07-10 15:50:42.692343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.692530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.692578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.692763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.692989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.693050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.693252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.693397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.693431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.693600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.693811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.693863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.694086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.694248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.694278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.694477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.694715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.694749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.695023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.695237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.695269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 
00:27:03.422 [2024-07-10 15:50:42.695494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.695688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.695732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.695921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.696107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.696141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.696318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.696524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.696574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.696770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.696984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.422 [2024-07-10 15:50:42.697029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.422 qpair failed and we were unable to recover it. 00:27:03.422 [2024-07-10 15:50:42.697206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.697399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.697441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.697607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.697819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.697871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.698029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.698209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.698237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 
00:27:03.423 [2024-07-10 15:50:42.698418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.698574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.698609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.698811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.698978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.699006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.699181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.699355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.699385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.699564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.699800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.699834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.700058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.700255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.700284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.700498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.700678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.700709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.700950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.701138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.701169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 
00:27:03.423 [2024-07-10 15:50:42.701366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.701583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.701616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.701788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.701996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.702041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.702189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.702329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.702359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.702537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.702749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.702802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.703016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.703184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.703213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.703356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.703550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.703599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.703809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.704015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.704063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 
00:27:03.423 [2024-07-10 15:50:42.704239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.704381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.704410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.704616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.704821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.704865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.705067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.705230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.705262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.705414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.705621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.705670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.705855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.706091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.706142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.706282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.706483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.706517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.423 qpair failed and we were unable to recover it. 00:27:03.423 [2024-07-10 15:50:42.706688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.423 [2024-07-10 15:50:42.706907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.706971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 
00:27:03.424 [2024-07-10 15:50:42.707139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.707318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.707346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.707545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.707733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.707782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.707977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.708161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.708189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.708332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.708511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.708558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.708729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.708946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.708992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.709130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.709276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.709307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.709502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.709669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.709699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 
00:27:03.424 [2024-07-10 15:50:42.709865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.710029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.710073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.710244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.710412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.710451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.710619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.710833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.710887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.711069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.711254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.711282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.711468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.711650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.711700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.711865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.712107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.712138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.712330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.712520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.712565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 
00:27:03.424 [2024-07-10 15:50:42.712726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.712940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.712986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.713135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.713278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.713309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.713496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.713706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.713752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.713940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.714131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.714165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.714312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.714447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.714480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.714709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.714913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.714961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.715141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.715282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.715313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 
00:27:03.424 [2024-07-10 15:50:42.715502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.715743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.715794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.715958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.716150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.716178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.716322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.716522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.716571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.716764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.716953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.717002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.717199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.717345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.717378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.717581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.717765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.717815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.718001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.718214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.718244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 
00:27:03.424 [2024-07-10 15:50:42.718445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.718653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.718689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.718882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.719095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.424 [2024-07-10 15:50:42.719145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.424 qpair failed and we were unable to recover it. 00:27:03.424 [2024-07-10 15:50:42.719350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.719539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.719585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.719803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.720009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.720057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.720226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.720366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.720399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.720633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.720873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.720903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.721110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.721300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.721333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 
00:27:03.425 [2024-07-10 15:50:42.721532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.721737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.721786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.721948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.722158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.722203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.722352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.722559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.722604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.722823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.723022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.723071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.723231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.723375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.723404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.723616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.723828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.723874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.724061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.724246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.724273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 
00:27:03.425 [2024-07-10 15:50:42.724449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.724633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.724682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.724895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.725102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.725131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.725271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.725413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.725452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.725679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.725858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.725888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.726098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.726279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.726309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.726521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.726680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.726723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.726908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.727088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.727117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 
00:27:03.425 [2024-07-10 15:50:42.727292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.727454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.727499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.727669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.727849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.727881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.728118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.728320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.728350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.728527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.728688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.728731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.728910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.729074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.729117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.729347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.729521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.729548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.729690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.729842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.729872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 
00:27:03.425 [2024-07-10 15:50:42.730079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.730253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.730282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.730456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.730645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.425 [2024-07-10 15:50:42.730672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.425 qpair failed and we were unable to recover it. 00:27:03.425 [2024-07-10 15:50:42.730851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.730995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.731024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.731255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.731469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.731496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.731690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.731872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.731901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.732061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.732242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.732272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.732452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.732605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.732632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 
00:27:03.426 [2024-07-10 15:50:42.732793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.733017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.733046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.733193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.733372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.733402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.733607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.733781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.733813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.734025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.734309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.734368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.734549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.734738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.734765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.735011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.735379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.735447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.735654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.735931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.735990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 
00:27:03.426 [2024-07-10 15:50:42.736179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.736329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.736358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.736569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.736711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.736738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.736989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.737283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.737332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.737544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.737673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.737700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.737908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.738155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.738206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.738415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.738562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.738589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.738723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.738858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.738883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 
00:27:03.426 [2024-07-10 15:50:42.739191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.739401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.739437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.739585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.739764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.739793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.739963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.740257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.740317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.740531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.740713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.740742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.741032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.741339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.741390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.741580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.741714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.741757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.741904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.742108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.742137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 
00:27:03.426 [2024-07-10 15:50:42.742378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.742576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.742603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.742821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.743168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.743219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.426 [2024-07-10 15:50:42.743422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.743608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.426 [2024-07-10 15:50:42.743634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.426 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.743820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.743970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.743999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.744174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.744404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.744439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.744632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.744835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.744864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.745072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.745272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.745342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 
00:27:03.427 [2024-07-10 15:50:42.745535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.745699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.745743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.745926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.746133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.746162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.746364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.746529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.746556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.746685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.746850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.746879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.747084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.747273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.747302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.747506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.747694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.747737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.747950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.748129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.748157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 
00:27:03.427 [2024-07-10 15:50:42.748301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.748480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.748509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.748668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.748808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.748835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.748967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.749152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.749199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.749377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.749557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.749585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.749747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.749905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.749932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.750124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.750295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.750324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.750508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.750672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.750699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 
00:27:03.427 [2024-07-10 15:50:42.750863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.751023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.751092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.751295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.751475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.751505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.751671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.751825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.751851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.751981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.752144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.752171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.752354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.752573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.752600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.752752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.752935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.752963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.753127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.753316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.753360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 
00:27:03.427 [2024-07-10 15:50:42.753577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.753768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.753827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.754015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.754147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.754173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.754359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.754538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.754568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.427 qpair failed and we were unable to recover it. 00:27:03.427 [2024-07-10 15:50:42.754754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.427 [2024-07-10 15:50:42.754913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.754955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.755104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.755307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.755337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.755535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.755676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.755703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.755845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.756029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.756056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 
00:27:03.428 [2024-07-10 15:50:42.756235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.756407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.756448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.756637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.756793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.756819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.756999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.757151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.757181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.757386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.757538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.757581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.757762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.757937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.757966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.758192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.758358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.758415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.758640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.758907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.758960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 
00:27:03.428 [2024-07-10 15:50:42.759118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.759298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.759327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.759520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.759686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.759729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.759973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.760131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.760158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.760378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.760564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.760602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.760771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.760938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.760981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.761142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.761280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.761306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.761511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.761692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.761721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 
00:27:03.428 [2024-07-10 15:50:42.761915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.762071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.762097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.762279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.762461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.762496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.428 [2024-07-10 15:50:42.762682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.762841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.428 [2024-07-10 15:50:42.762868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.428 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.763029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.763194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.763221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.763383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.763551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.763579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.763739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.763928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.763959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.764146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.764329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.764358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 
00:27:03.703 [2024-07-10 15:50:42.764542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.764724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.764755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.764932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.765109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.765139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.765329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.765492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.765543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.765734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.765938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.765998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.766205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.766373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.766413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.766593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.766808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.766835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.703 qpair failed and we were unable to recover it. 00:27:03.703 [2024-07-10 15:50:42.767010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.767217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.703 [2024-07-10 15:50:42.767246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 
00:27:03.704 [2024-07-10 15:50:42.767436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.767628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.767689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.767901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.768055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.768087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.768258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.768420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.768455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.768646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.768836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.768865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.769037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.769191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.769239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.769408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.769600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.769626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.769814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.769943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.769968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 
00:27:03.704 [2024-07-10 15:50:42.770134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.770294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.770323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.770508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.770690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.770723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.770977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.771166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.771195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.771373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.771550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.771581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.771753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.771960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.772011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.772173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.772313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.772339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.772533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.772666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.772692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 
00:27:03.704 [2024-07-10 15:50:42.772849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.773043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.773068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.773260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.773544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.773597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.773780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.773955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.773981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.774170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.774326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.774352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.774519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.774701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.774728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.774919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.775082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.775110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.775335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.775515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.775545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 
00:27:03.704 [2024-07-10 15:50:42.775713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.775876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.775901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.776054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.776191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.776217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.776437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.776648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.776675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.776816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.776992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.777022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.777184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.777381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.777410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.777630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.777771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.777796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.777923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.778083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.778108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 
00:27:03.704 [2024-07-10 15:50:42.778307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.778486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.778516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.778718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.778930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.778956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.704 qpair failed and we were unable to recover it. 00:27:03.704 [2024-07-10 15:50:42.779115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.704 [2024-07-10 15:50:42.779244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.779270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.779457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.779622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.779665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.779875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.780040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.780067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.780211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.780504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.780536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.780677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.780811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.780837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 
00:27:03.705 [2024-07-10 15:50:42.781005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.781163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.781188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.781389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.781563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.781590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.781732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.782019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.782045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.782247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.782421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.782465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.782670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.782959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.783009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.783216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.783439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.783469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.783662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.783819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.783845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 
00:27:03.705 [2024-07-10 15:50:42.784032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.784211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.784242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.784423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.784584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.784612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.784789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.784984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.785026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.785319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.785551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.785581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.785783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.785959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.785987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.786168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.786345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.786373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.786559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.786722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.786764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 
00:27:03.705 [2024-07-10 15:50:42.787083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.787308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.787337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.787549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.787726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.787755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.787959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.788101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.788131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.788343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.788505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.788532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.788687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.788890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.788920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.789070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.789271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.789299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.789474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.789643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.789673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 
00:27:03.705 [2024-07-10 15:50:42.789848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.790054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.790083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.790253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.790456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.790485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.790675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.790840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.790866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.791062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.791223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.791250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.705 [2024-07-10 15:50:42.791458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.791641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.705 [2024-07-10 15:50:42.791670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.705 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.791895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.792056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.792098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.792277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.792403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.792434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 
00:27:03.706 [2024-07-10 15:50:42.792618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.792867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.792918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.793099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.793261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.793287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.793448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.793651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.793676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.793838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.794107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.794159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.794339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.794542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.794571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.794753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.794909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.794935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.795211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.795410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.795448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 
00:27:03.706 [2024-07-10 15:50:42.795632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.795811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.795836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.795968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.796109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.796139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.796310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.796437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.796464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.796658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.796858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.796886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.797166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.797369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.797397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.797590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.797872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.797924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.798108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.798237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.798264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 
00:27:03.706 [2024-07-10 15:50:42.798485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.798661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.798689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.798900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.799034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.799060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.799247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.799449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.799479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.799694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.799854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.799911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.800092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.800298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.800326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.800482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.800695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.800721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.800939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.801120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.801148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 
00:27:03.706 [2024-07-10 15:50:42.801315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.801488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.801514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.801681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.801826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.801855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.802032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.802616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.802651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.802868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.803014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.803043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.803209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.803385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.803411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.803592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.803769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.803798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 00:27:03.706 [2024-07-10 15:50:42.803973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.804144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.804173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.706 qpair failed and we were unable to recover it. 
00:27:03.706 [2024-07-10 15:50:42.804358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.706 [2024-07-10 15:50:42.804531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.804558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.804719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.804930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.804958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.805111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.805277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.805303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.805488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.805682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.805708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.805900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.806082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.806109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.806319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.806560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.806593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.806797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.807126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.807178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 
00:27:03.707 [2024-07-10 15:50:42.807362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.807585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.807611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.807764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.807943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.807973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.808283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.808534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.808561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.808700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.808828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.808855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.809018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.809155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.809180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.809323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.809552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.809579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.809733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.809957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.810006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 
00:27:03.707 [2024-07-10 15:50:42.810192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.810375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.810401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.810454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d674b0 (9): Bad file descriptor 00:27:03.707 [2024-07-10 15:50:42.810741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.810945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.810977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.811142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.811332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.811358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.811541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.811705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.811731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.812015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.812391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.812451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.812635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.812798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.812825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.812986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.813171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.813197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 
00:27:03.707 [2024-07-10 15:50:42.813367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.813493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.813519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.813678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.813816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.813844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.814000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.814163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.814191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.814328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.814482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.814508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.814675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.814846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.707 [2024-07-10 15:50:42.814889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.707 qpair failed and we were unable to recover it. 00:27:03.707 [2024-07-10 15:50:42.815041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.815206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.815233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.815757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.815971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.816000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 
00:27:03.708 [2024-07-10 15:50:42.816172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.816355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.816381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.816558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.816722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.816747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.816946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.817138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.817167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.817346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.817548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.817575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.817744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.817936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.818002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.818191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.818376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.818402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.818585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.818762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.818789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 
00:27:03.708 [2024-07-10 15:50:42.818953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.819086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.819114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.819305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.819490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.819535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.819673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.819842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.819868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.820005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.820214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.820244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.820416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.820589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.820614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.820778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.820924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.820965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.821179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.821347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.821377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 
00:27:03.708 [2024-07-10 15:50:42.821547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.821687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.821715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.821911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.822071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.822097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.822303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.822495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.822523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.822690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.822843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.822868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.823031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.823203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.823243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.823417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.823636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.823662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.823884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.824179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.824232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 
00:27:03.708 [2024-07-10 15:50:42.824442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.824631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.824657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.824871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.825055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.825095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.825288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.825462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.825500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.825667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.825838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.825869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.826065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.826346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.826372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.826543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.826710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.708 [2024-07-10 15:50:42.826735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.708 qpair failed and we were unable to recover it. 00:27:03.708 [2024-07-10 15:50:42.826907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.827121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.827150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 
00:27:03.709 [2024-07-10 15:50:42.827311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.827519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.827549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.827766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.827925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.827950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.828149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.828320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.828350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.828532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.828759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.828784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.828942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.829131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.829171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.829315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.829519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.829549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.829729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.829995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.830047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 
00:27:03.709 [2024-07-10 15:50:42.830275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.830477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.830508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.830693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.830880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.830906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.831075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.831281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.831308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.831534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.831719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.831745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.831936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.832124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.832165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.832346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.832506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.832532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.832674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.832887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.832913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 
00:27:03.709 [2024-07-10 15:50:42.833097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.833254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.833284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.833494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.833668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.833699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.833845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.834021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.834061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.834292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.834480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.834510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.834683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.834873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.834913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.835122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.835248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.835277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.835447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.835647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.835673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 
00:27:03.709 [2024-07-10 15:50:42.835860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.836078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.836105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.836287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.836457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.836492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.836682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.836852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.836886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.837079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.837240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.837276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.837494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.837681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.837712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.837875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.838015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.838042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.838199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.838362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.838389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 
00:27:03.709 [2024-07-10 15:50:42.838564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.838721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.838747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.709 qpair failed and we were unable to recover it. 00:27:03.709 [2024-07-10 15:50:42.838878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.709 [2024-07-10 15:50:42.839014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.839045] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.839236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.839372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.839398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.839572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.839709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.839737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.839966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.840148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.840173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.840368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.840553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.840579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.840785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.840993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.841020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 
00:27:03.710 [2024-07-10 15:50:42.841155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.841283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.841310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.841482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.841620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.841647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.841909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.842103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.842130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.842316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.842505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.842532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.842717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.842895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.842925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.843145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.843269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.843295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.843474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.843636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.843662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 
00:27:03.710 [2024-07-10 15:50:42.843830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.843993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.844019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.844212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.844385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.844431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.844622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.844788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.844823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.844978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.845172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.845213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.845393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.845567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.845594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.845778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.845940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.845980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.846146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.846393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.846419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 
00:27:03.710 [2024-07-10 15:50:42.846574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.846763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.846804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.846978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.847224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.847249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.847398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.847612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.847640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.847843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.848068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.848094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.848257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.848432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.848472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.848640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.848813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.848839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.849045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.849252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.849277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 
00:27:03.710 [2024-07-10 15:50:42.849450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.849625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.849651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.849855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.850096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.850137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.850274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.850435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.850472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.710 qpair failed and we were unable to recover it. 00:27:03.710 [2024-07-10 15:50:42.850636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.710 [2024-07-10 15:50:42.850812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.850854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.851062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.851243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.851271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.851451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.851591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.851619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.851796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.851948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.851975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 
00:27:03.711 [2024-07-10 15:50:42.852187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.852340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.852366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.852540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.852672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.852704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.852907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.853108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.853134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.853329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.853566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.853593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.853770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.854061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.854086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.854278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.854478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.854519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.854688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.854865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.854890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 
00:27:03.711 [2024-07-10 15:50:42.855154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.855391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.855416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.855728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.856066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.856117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.856331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.856507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.856537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.856785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.856950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.856976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.857158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.857358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.857383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.857615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.857803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.857833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.858012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.858196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.858223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 
00:27:03.711 [2024-07-10 15:50:42.858388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.858566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.858594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.858757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.858961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.859003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.859246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.859388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.859414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.859623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.859762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.859789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.859952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.860144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.860171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.860359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.860531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.860558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.860737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.860884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.860927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 
00:27:03.711 [2024-07-10 15:50:42.861113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.861247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.861275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.861460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.861605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.861632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.861799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.861962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.861990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.862151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.862294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.862321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.862452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.862639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.862665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.711 qpair failed and we were unable to recover it. 00:27:03.711 [2024-07-10 15:50:42.862843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.711 [2024-07-10 15:50:42.863024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.863050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.863233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.863429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.863458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 
00:27:03.712 [2024-07-10 15:50:42.863631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.863803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.863829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.863997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.864182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.864209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.864369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.864504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.864531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.864709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.864926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.864953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.865093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.865239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.865266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.865434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.865571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.865597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.865742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.865909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.865936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 
00:27:03.712 [2024-07-10 15:50:42.866078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.866220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.866247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.866405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.866571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.866597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.866769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.866933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.866960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.867160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.867345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.867371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.867543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.867709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.867736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.867908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.868032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.868058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.868223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.868355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.868382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 
00:27:03.712 [2024-07-10 15:50:42.868571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.868712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.868744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.868907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.869076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.869103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.869234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.869400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.869431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.869612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.869748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.869780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.712 qpair failed and we were unable to recover it. 00:27:03.712 [2024-07-10 15:50:42.869964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.870144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.712 [2024-07-10 15:50:42.870170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.870359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.870549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.870576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.870717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.870938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.870966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 
00:27:03.713 [2024-07-10 15:50:42.871159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.871285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.871312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.871483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.871612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.871638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.871807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.871975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.872003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.872159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.872292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.872321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.872488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.872627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.872654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.872818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.872950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.872977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.873169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.873304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.873331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 
00:27:03.713 [2024-07-10 15:50:42.873501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.873629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.873655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.873866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.874064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.874096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.874270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.874459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.874515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.874730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.874905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.874949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.875155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.875304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.875332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.875528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.875689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.875729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.875914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.876082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.876111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 
00:27:03.713 [2024-07-10 15:50:42.876257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.876438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.876468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.876628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.876795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.876822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.876997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.877214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.877244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.877455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.877644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.877670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.713 [2024-07-10 15:50:42.877900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.878093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.713 [2024-07-10 15:50:42.878134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.713 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.878365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.878527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.878557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.878734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.878917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.878947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 
00:27:03.714 [2024-07-10 15:50:42.879138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.879306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.879333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.879521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.879673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.879714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.879896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.880097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.880124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.880317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.880560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.880589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.880797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.880943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.880986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.881202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.881354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.881383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.881553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.881698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.881742] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 
00:27:03.714 [2024-07-10 15:50:42.881893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.882050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.882082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.882293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.882525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.882552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.882686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.882848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.882891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.883061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.883245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.883271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.883453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.883618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.883643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.883831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.883970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.883998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.884211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.884388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.884417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 
00:27:03.714 [2024-07-10 15:50:42.884585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.884737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.884766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.884953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.885145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.885175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.885388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.885546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.885572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.885711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.885882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.885912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.886061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.886272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.886301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.886446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.886596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.886625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.886800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.886975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.887004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 
00:27:03.714 [2024-07-10 15:50:42.887216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.887396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.887431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.887577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.887712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.714 [2024-07-10 15:50:42.887740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.714 qpair failed and we were unable to recover it. 00:27:03.714 [2024-07-10 15:50:42.887916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.888086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.888114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.888260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.888433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.888459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.888605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.888793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.888822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.888991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.889168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.889196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.889356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.889497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.889539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 
00:27:03.715 [2024-07-10 15:50:42.889718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.889920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.889949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.890127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.890303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.890332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.890495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.890662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.890688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.890830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.890997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.891023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.891210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.891392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.891421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.891619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.891756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.891782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.891963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.892139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.892167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 
00:27:03.715 [2024-07-10 15:50:42.892350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.892508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.892537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.892717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.892907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.892932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.893075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.893289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.893319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.893509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.893659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.893688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.893870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.894058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.894085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.894247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.894376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.894403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.894571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.894725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.894755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 
00:27:03.715 [2024-07-10 15:50:42.894924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.895108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.895135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.895339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.895514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.895543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.895692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.895901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.895930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.896117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.896334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.896363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.896541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.896692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.896722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.896909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.897052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.715 [2024-07-10 15:50:42.897082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.715 qpair failed and we were unable to recover it. 00:27:03.715 [2024-07-10 15:50:42.897223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.897368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.897395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 
00:27:03.716 [2024-07-10 15:50:42.897551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.897693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.897739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.897927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.898082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.898109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.898241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.898401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.898446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.898584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.898721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.898748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.898939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.899135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.899161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.899311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.899478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.899504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.899661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.899873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.899909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 
00:27:03.716 [2024-07-10 15:50:42.900040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.900181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.900208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.900370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.900516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.900546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.900680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.900822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.900848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.901030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.901182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.901209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.901410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.901608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.901638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.901828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.902017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.902059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.902242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.902400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2234089 Killed "${NVMF_APP[@]}" "$@" 00:27:03.716 [2024-07-10 15:50:42.902436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 
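errno = 111 in the posix_sock_create lines above is ECONNREFUSED on Linux. The `target_disconnect.sh: line 44: 2234089 Killed "${NVMF_APP[@]}" "$@"` message shows the nvmf target application being killed by the test, so nothing is listening on 10.0.0.2:4420 and the host's reconnect attempts keep getting refused until a new target is started below. A minimal illustrative C sketch (not SPDK code; the address and port are simply reused from the log) that produces the same errno when the host is reachable but no listener is bound to the port:

/* connect_refused.c - show that connect() to a port with no listener
 * fails with errno 111 (ECONNREFUSED) on Linux.
 * Build: cc -o connect_refused connect_refused.c */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                    /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but nothing listening on the port,
         * this prints errno = 111 (Connection refused), matching the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}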
00:27:03.716 [2024-07-10 15:50:42.902567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.902695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.902720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 15:50:42 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 15:50:42 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:03.716 [2024-07-10 15:50:42.902914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 15:50:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:03.716 [2024-07-10 15:50:42.903086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.903115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 15:50:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:03.716 15:50:42 -- common/autotest_common.sh@10 -- # set +x 00:27:03.716 [2024-07-10 15:50:42.903318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.903459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.903485] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.903618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.903758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.903805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.903983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.904162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.904191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 00:27:03.716 [2024-07-10 15:50:42.904378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.904550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.904579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.716 qpair failed and we were unable to recover it. 
00:27:03.716 [2024-07-10 15:50:42.904741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.716 [2024-07-10 15:50:42.904941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.904970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.905137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.905313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.905341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.905519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.905685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.905714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.905901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.906093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.906136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.906285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.906481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.906510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 15:50:42 -- nvmf/common.sh@469 -- # nvmfpid=2234751 00:27:03.717 15:50:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:03.717 [2024-07-10 15:50:42.906692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 15:50:42 -- nvmf/common.sh@470 -- # waitforlisten 2234751 00:27:03.717 [2024-07-10 15:50:42.906851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.906879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 
00:27:03.717 15:50:42 -- common/autotest_common.sh@819 -- # '[' -z 2234751 ']' 00:27:03.717 [2024-07-10 15:50:42.907055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 15:50:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.717 [2024-07-10 15:50:42.907185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.907230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 15:50:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 15:50:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.717 [2024-07-10 15:50:42.907409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 15:50:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:03.717 [2024-07-10 15:50:42.907568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.907597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 15:50:42 -- common/autotest_common.sh@10 -- # set +x 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.907776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.907957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.907986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.908169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.908307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.908333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.908540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.908693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.908722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.908926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.909086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.909110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it.
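The autotest_common.sh trace interleaved above (rpc_addr=/var/tmp/spdk.sock, max_retries=100, and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message) is the harness's waitforlisten step: it polls until the freshly started nvmf_tgt accepts connections on its RPC socket. The real helper is a shell function in the SPDK test scripts; the following is only a rough C sketch of that polling idea, with the socket path and retry count copied from the log:

/* wait_for_listen.c - poll a UNIX-domain socket until a server is
 * accepting connections; a rough sketch of what a "waitforlisten"
 * style helper does for /var/tmp/spdk.sock (not the SPDK test code). */
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <time.h>
#include <unistd.h>

static int wait_for_listen(const char *path, int max_retries)
{
    struct timespec delay = { .tv_sec = 0, .tv_nsec = 500 * 1000 * 1000 };

    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = { 0 };
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;               /* the RPC socket is up and listening */
        }
        close(fd);
        nanosleep(&delay, NULL);    /* retry every 0.5 s up to max_retries */
    }
    return -1;                      /* timed out waiting for the listener */
}

int main(void)
{
    /* Socket path and retry count taken from the log for illustration only. */
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        puts("spdk.sock is accepting connections");
    else
        puts("timed out waiting for spdk.sock");
    return 0;
}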
00:27:03.717 [2024-07-10 15:50:42.909252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.909380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.909407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.909548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.909690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.909714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.909850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.910023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.910051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.910263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.910445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.910475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.910620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.910758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.910783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.910915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.911098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.911127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 00:27:03.717 [2024-07-10 15:50:42.911315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.911483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.717 [2024-07-10 15:50:42.911510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.717 qpair failed and we were unable to recover it. 
00:27:03.717 [2024-07-10 15:50:42.911655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.911858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.911888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.912074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.912258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.912283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.912430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.912575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.912601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.912774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.913002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.913033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.913188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.913380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.913406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.913582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.913739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.913769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.913978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.914190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.914223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 
00:27:03.718 [2024-07-10 15:50:42.914410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.914584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.914609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.914763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.914934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.914962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.915143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.915278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.915305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.915454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.915599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.915625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.915817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.915978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.916024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.916208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.916338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.916363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.916527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.916684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.916730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 
00:27:03.718 [2024-07-10 15:50:42.916932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.917134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.917162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.917345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.917510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.917536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.917680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.917808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.917833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.918030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.918182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.918207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.918369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.918512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.918539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.918683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.918915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.918942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 00:27:03.718 [2024-07-10 15:50:42.919125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.919312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.919337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.718 qpair failed and we were unable to recover it. 
00:27:03.718 [2024-07-10 15:50:42.919503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.718 [2024-07-10 15:50:42.919653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.919679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.919892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.920051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.920084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.920273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.920439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.920465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.920609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.920813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.920841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.921016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.921198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.921225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.921364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.921537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.921563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.921715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.921900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.921928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 
00:27:03.719 [2024-07-10 15:50:42.922106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.922271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.922298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.922506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.922646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.922672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.922833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.922993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.923018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.923191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.923331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.923357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.923506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.923638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.923663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.923825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.923991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.924018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.924182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.924318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.924343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 
00:27:03.719 [2024-07-10 15:50:42.924496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.924634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.924659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.924829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.924994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.925019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.925166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.925332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.925358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.925516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.925647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.925672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.925810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.925974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.925999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.926159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.926321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.926347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.926513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.926679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.926704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 
00:27:03.719 [2024-07-10 15:50:42.926868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.927027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.927052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.927190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.927355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.927384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.719 [2024-07-10 15:50:42.927557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.927715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.719 [2024-07-10 15:50:42.927741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.719 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.927873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.928032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.928057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.928226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.928378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.928404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.928560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.928697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.928724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.928897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.929058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.929083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 
00:27:03.720 [2024-07-10 15:50:42.929259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.929416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.929459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.929610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.929778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.929804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.929945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.930105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.930130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.930275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.930437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.930463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.930624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.930806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.930831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.930963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.931110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.931139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.931311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.931453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.931481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 
00:27:03.720 [2024-07-10 15:50:42.931650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.931787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.931813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.932004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.932140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.932168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.932303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.932470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.932496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.932657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.932794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.932819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.932977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.933120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.933144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.933308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.933495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.933521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.933658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.933814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.933840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 
00:27:03.720 [2024-07-10 15:50:42.934003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.934163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.934190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.934331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.934521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.934547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.934682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.934873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.934898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.720 [2024-07-10 15:50:42.935035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.935196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.720 [2024-07-10 15:50:42.935222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.720 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.935397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.935566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.935592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.935748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.935910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.935938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.936075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.936231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.936257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 
00:27:03.721 [2024-07-10 15:50:42.936423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.936615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.936640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.936797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.936928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.936952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.937105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.937267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.937293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.937458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.937597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.937622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.937789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.937928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.937954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.938147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.938338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.938363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.938509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.938669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.938695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 
00:27:03.721 [2024-07-10 15:50:42.938842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.938973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.939000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.939139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.939330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.939355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.939514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.939686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.939712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.939897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.940031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.940055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.940213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.940388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.940416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.940577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.940750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.940776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.940913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.941047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.941070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 
00:27:03.721 [2024-07-10 15:50:42.941239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.941404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.941434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.941597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.941742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.941766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.941930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.942065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.942088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.942251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.942392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.942418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.942574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.942737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.942761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.942897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.943038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.943063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 00:27:03.721 [2024-07-10 15:50:42.943194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.943326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.721 [2024-07-10 15:50:42.943350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.721 qpair failed and we were unable to recover it. 
00:27:03.721 [2024-07-10 15:50:42.943514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.943676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.943700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.943840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.944002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.944027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.944191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.944327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.944352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.944495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.944683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.944707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.944880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.945074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.945099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.945282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.945417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.945446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.945579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.945747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.945772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 
00:27:03.722 [2024-07-10 15:50:42.945929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.946111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.946136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.946300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.946455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.946480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.946642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.946796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.946820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.946979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.947177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.947202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.947362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.947519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.947543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.947709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.947898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.947922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.948052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.948242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.948267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 
00:27:03.722 [2024-07-10 15:50:42.948404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.948569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.948593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.948759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.948888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.948912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.949069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.949227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.949256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.949418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.949556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.949581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.949742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.949879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.949903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.950043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.950226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.950251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.950415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.950554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.950578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.950596] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:03.722 [2024-07-10 15:50:42.950681] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.722 [2024-07-10 15:50:42.950742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.950904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.950928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.951100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.951252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.951277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.951408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.951600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.951626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.951791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.951925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.951950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.952113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.952301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.952326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.722 qpair failed and we were unable to recover it. 00:27:03.722 [2024-07-10 15:50:42.952497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.722 [2024-07-10 15:50:42.952656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.952681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.952817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.953007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.953031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 
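In the EAL parameter line above, the -m 0xF0 core mask from the traced nvmf_tgt invocation shows up as -c 0xF0; 0xF0 has bits 4 through 7 set, so the target's reactor threads are pinned to CPU cores 4-7. A purely illustrative way to expand such a hex core mask into core IDs:

# Expand a hex core mask (here the 0xF0 passed to nvmf_tgt / DPDK) into core IDs.
mask=0xF0
for core in $(seq 0 31); do
    if (( (mask >> core) & 1 )); then
        echo "core $core enabled"
    fi
done
# 0xF0 = 11110000 in binary, so this prints cores 4, 5, 6 and 7.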
00:27:03.723 [2024-07-10 15:50:42.953191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.953375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.953399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.953552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.953680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.953705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.953847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.954172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.954489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.954813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.954971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.955102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.955236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.955263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 
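[editor's note] The failure repeated throughout this section is connect() returning errno 111, which on Linux is ECONNREFUSED: the TCP connection attempt to 10.0.0.2 on port 4420 (the standard NVMe/TCP port) is being rejected because nothing is accepting connections there yet, so nvme_tcp_qpair_connect_sock cannot bring the qpair up and the host keeps retrying. The sketch below is a stand-alone diagnostic example, not SPDK code; it reproduces the same errno by attempting a plain TCP connect to the address and port taken from the log while no listener is present.

/* Diagnostic sketch (not SPDK code): with no listener bound on the target
 * port, this prints "connect: Connection refused (errno 111)", matching the
 * posix_sock_create errors above. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),              /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* target address from the log */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect: %s (errno %d)\n", strerror(errno), errno);

    close(fd);
    return 0;
}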
00:27:03.723 [2024-07-10 15:50:42.955444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.955609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.955634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.955768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.955933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.955958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.956124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.956249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.956274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.956439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.956592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.956617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.956796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.956960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.956985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.957119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.957279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.957304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.957442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.957581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.957606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 
00:27:03.723 [2024-07-10 15:50:42.957745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.957889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.957914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.958100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.958232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.958256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.958388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.958526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.958551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.958680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.958832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.958857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.959021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.959152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.959176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.959363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.959498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.723 [2024-07-10 15:50:42.959526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.723 qpair failed and we were unable to recover it. 00:27:03.723 [2024-07-10 15:50:42.959690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.959849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.959874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 
00:27:03.724 [2024-07-10 15:50:42.960059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.960208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.960232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.960365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.960503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.960528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.960666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.960824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.960848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.960991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.961124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.961149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.961307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.961467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.961492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.961624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.961816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.961840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.962009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.962141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.962167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 
00:27:03.724 [2024-07-10 15:50:42.962325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.962462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.962491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.962654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.962823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.962848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.962994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.963125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.963150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.963310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.963478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.963504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.963647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.963812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.963837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.963976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.964109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.964133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.964297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.964481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.964506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 
00:27:03.724 [2024-07-10 15:50:42.964641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.964776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.964801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.964984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.965122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.965149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.965290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.965460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.965486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.965621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.965812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.965840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.966004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.966163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.966187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.966322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.966471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.966496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.966628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.966786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.966811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 
00:27:03.724 [2024-07-10 15:50:42.966968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.967127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.967152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.724 qpair failed and we were unable to recover it. 00:27:03.724 [2024-07-10 15:50:42.967313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.967493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.724 [2024-07-10 15:50:42.967519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.967684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.967846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.967870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.968029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.968192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.968217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.968372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.968516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.968541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.968684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.968846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.968870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.969032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.969218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.969242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-07-10 15:50:42.969406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.969586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.969612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.969756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.969943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.969968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.970090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.970216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.970240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.970412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.970565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.970589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.970747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.970873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.970898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.971040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.971179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.971204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.971363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.971499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.971524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-07-10 15:50:42.971714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.971843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.971868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.972009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.972149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.972173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.972302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.972495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.972522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.972677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.972812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.972838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.972968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.973104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.973128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.973314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.973461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.973486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.973649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.973790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.973815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 
00:27:03.725 [2024-07-10 15:50:42.974000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.974160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.974185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.974347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.974499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.974525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.725 [2024-07-10 15:50:42.974665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.974824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.725 [2024-07-10 15:50:42.974849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.725 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.974988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.975125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.975150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.975313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.975466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.975492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.975656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.975790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.975815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.975995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.976157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.976182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.726 [2024-07-10 15:50:42.976369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.976498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.976523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.976690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.976848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.976873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.977013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.977199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.977223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.977406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.977569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.977594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.977731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.977891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.977917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.978056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.978311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.978336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.978499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.978634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.978660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.726 [2024-07-10 15:50:42.978819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.979168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.979488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.979778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.979963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.980152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.980313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.980338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.980501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.980636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.980661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.980797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.980954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.980978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 
00:27:03.726 [2024-07-10 15:50:42.981109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.981276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.981300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.981466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.981598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.981622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.981780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.981937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.981961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.982119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.982251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.726 [2024-07-10 15:50:42.982276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.726 qpair failed and we were unable to recover it. 00:27:03.726 [2024-07-10 15:50:42.982422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.982565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.982591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.982756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.982918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.982947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.983134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.983272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.983297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-07-10 15:50:42.983459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.983594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.983618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.983762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.983981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.984004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.984173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.984334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.984359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.984546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.984677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.984702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.984840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.984978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.985004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.985192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.985375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.985400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.985571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.985733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.985758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-07-10 15:50:42.985896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.986029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.986055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.986216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.986339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.986364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.986514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.986698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.986723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.986865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.986995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.987019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.987183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.987367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.987391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.987559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.987717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.987741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.987901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.988064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.988089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 
00:27:03.727 [2024-07-10 15:50:42.988218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.988403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.988433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.988561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.988716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.988740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.988880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.989037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.989063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.989248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.989388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.989413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.989583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.989735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.989760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.989925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.990113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.990138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.727 qpair failed and we were unable to recover it. 00:27:03.727 [2024-07-10 15:50:42.990270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.990413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.727 [2024-07-10 15:50:42.990444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-07-10 15:50:42.990583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.990742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.990766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.990930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.728 [2024-07-10 15:50:42.991087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.991112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.991271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.991432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.991457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.991594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.991785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.991809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.991977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.992136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.992161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.992326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.992489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.992514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.992644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.992768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.992792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
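[editor's note] Interleaved with the connect() errors above is the EAL warning "No free 2048 kB hugepages reported on node 1". Assuming the standard Linux sysfs layout, the per-NUMA-node counters behind that message can be inspected directly; the sketch below (illustrative only, not DPDK code) reads the free 2048 kB hugepage count for nodes 0 and 1.

/* Illustrative sketch, assuming the standard Linux sysfs layout: print the
 * free 2048 kB hugepage count per NUMA node, the same per-node figure the
 * EAL warning above refers to. */
#include <stdio.h>

int main(void)
{
    for (int node = 0; node < 2; node++) {
        char path[128];
        unsigned long free_pages = 0;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/"
                 "hugepages-2048kB/free_hugepages", node);

        FILE *f = fopen(path, "r");
        if (!f || fscanf(f, "%lu", &free_pages) != 1)
            printf("node %d: no 2048 kB hugepage info\n", node);
        else
            printf("node %d: %lu free 2048 kB hugepages\n", node, free_pages);
        if (f)
            fclose(f);
    }
    return 0;
}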
00:27:03.728 [2024-07-10 15:50:42.992955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.993114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.993138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.993277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.993456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.993481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.993640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.993799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.993823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.993992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.994146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.994171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.994330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.994515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.994540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.994670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.994830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.994855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.994995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.995154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.995178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-07-10 15:50:42.995344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.995492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.995518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.995683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.995845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.995869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.996060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.996246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.996270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.996460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.996599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.996623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.996783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.996915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.996939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.997105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.997297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.997322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.997464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.997648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.997672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 
00:27:03.728 [2024-07-10 15:50:42.997856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.997995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.998019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.998209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.998339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.998363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.998500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.998636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.728 [2024-07-10 15:50:42.998660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.728 qpair failed and we were unable to recover it. 00:27:03.728 [2024-07-10 15:50:42.998842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:42.998969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:42.998993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:42.999157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:42.999344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:42.999369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:42.999514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:42.999680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:42.999704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:42.999855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.000014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.000038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-07-10 15:50:43.000206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.000381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.000406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.000557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.000693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.000717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.000853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.001009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.001033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.001199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.001359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.001386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.001577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.001731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.001755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.001921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.002055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.002079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.002245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.002406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.002436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-07-10 15:50:43.002579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.002742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.002767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.002953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.003110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.003135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.003323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.003458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.003484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.003635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.003797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.003825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.003986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.004172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.004197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.004357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.004524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.004557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.004724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.004884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.004908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 
00:27:03.729 [2024-07-10 15:50:43.005067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.005227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.005252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.005433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.005614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.005641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.729 qpair failed and we were unable to recover it. 00:27:03.729 [2024-07-10 15:50:43.005810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.729 [2024-07-10 15:50:43.005996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.006020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.006183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.006316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.006340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.006505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.006703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.006728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.006890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.007050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.007074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.007232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.007388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.007413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 
00:27:03.730 [2024-07-10 15:50:43.007616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.007755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.007780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.007916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.008067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.008091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.008256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.008415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.008444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.008587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.008722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.008746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.008900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.009059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.009083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.009214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.009372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.009397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.009548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.009713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.009738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 
00:27:03.730 [2024-07-10 15:50:43.009893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.010057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.010081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.010221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.010357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.010381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.010530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.010695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.010719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.010883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.011050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.011074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.011211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.011344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.011370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.011544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.011707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.011732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.011895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.012048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.012073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 
00:27:03.730 [2024-07-10 15:50:43.012196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.012332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.012356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.012520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.012655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.012681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.012869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.013026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.013051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.013236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.013393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.013417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.013617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.013804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.013829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.013964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.014099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.014123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.014297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.014433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.014458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 
00:27:03.730 [2024-07-10 15:50:43.014596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.014727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.014751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.730 qpair failed and we were unable to recover it. 00:27:03.730 [2024-07-10 15:50:43.014938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.015093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.730 [2024-07-10 15:50:43.015117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.015274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.015399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.015431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.015628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.015768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.015793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.015954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.016075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.016100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.016227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.016384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.016409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.016611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.016801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.016827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 
00:27:03.731 [2024-07-10 15:50:43.017010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.017135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.017160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.017324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.017486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.017512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.017679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.017869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.017895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.018082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.018217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.018243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.018407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.018547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.018572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.018735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.018899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.018924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.019071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.019238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.019262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 
00:27:03.731 [2024-07-10 15:50:43.019397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.019533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.019558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.019755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.019930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.019954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.020113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.020248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.020272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.020460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.020632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.020656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.020843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.020971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.020996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.021130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.021309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.021338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.021483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.021649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.021674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 
00:27:03.731 [2024-07-10 15:50:43.021810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.021970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.021995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.022128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.022314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.022338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.022470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.022635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.022659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.022823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.022978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.023002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.023165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.023347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.023371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.731 qpair failed and we were unable to recover it. 00:27:03.731 [2024-07-10 15:50:43.023555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.023710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.731 [2024-07-10 15:50:43.023735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.023906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.024038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.024062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 
00:27:03.732 [2024-07-10 15:50:43.024218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.024365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.024389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.024520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.024686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.024710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.024844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.025006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.025030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.025183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.025336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.025360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.025523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.025656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.025681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.025841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.026001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.026025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.026185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.026358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.026383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 
00:27:03.732 [2024-07-10 15:50:43.026567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.026690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.026715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.026873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.027030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.027054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.027161] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:03.732 [2024-07-10 15:50:43.027188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.027317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.027341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.027534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.027703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.027727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.027892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.028047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.028071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.028204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.028388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.028412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.028595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.028736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.028760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 
00:27:03.732 [2024-07-10 15:50:43.028916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.029066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.029091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.029225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.029384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.029408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.029576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.029701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.029725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.029862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.030027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.030051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.030214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.030345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.030369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.030633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.030818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.030843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.030984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.031147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.031172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 
00:27:03.732 [2024-07-10 15:50:43.031306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.031462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.031488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.031653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.031839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.031863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.732 qpair failed and we were unable to recover it. 00:27:03.732 [2024-07-10 15:50:43.031995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.732 [2024-07-10 15:50:43.032182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.032206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.032373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.032512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.032537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.032693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.032866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.032892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.033088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.033250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.033274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.033460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.033588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.033612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 
00:27:03.733 [2024-07-10 15:50:43.033767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.033933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.033958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.034149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.034310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.034334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.034497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.034659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.034684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.034836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.035094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.035119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.035269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.035404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.035433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.035596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.035823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.035848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 00:27:03.733 [2024-07-10 15:50:43.035978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.036168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.733 [2024-07-10 15:50:43.036192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:03.733 qpair failed and we were unable to recover it. 
00:27:04.015 [2024-07-10 15:50:43.086517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.015 [2024-07-10 15:50:43.086676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.015 [2024-07-10 15:50:43.086700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:04.015 qpair failed and we were unable to recover it.
00:27:04.015 [2024-07-10 15:50:43.086890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.087049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.087073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.087208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.087365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.087392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.087551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.087687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.087712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.087904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.088059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.088084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.088251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.088446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.088472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.088607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.088767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.088791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.088960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.089147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.089172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 
00:27:04.015 [2024-07-10 15:50:43.089305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.089473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.089498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.089655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.089818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.089842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.090010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.090146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.090173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.090311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.090444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.090470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.090639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.090797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.090822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.090982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.091111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.091135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.091273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.091401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.091430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 
00:27:04.015 [2024-07-10 15:50:43.091598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.091758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.091783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.091946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.092105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.092130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.092261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.092422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.092461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.092625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.092749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.092773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.092937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.093097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.015 [2024-07-10 15:50:43.093121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.015 qpair failed and we were unable to recover it. 00:27:04.015 [2024-07-10 15:50:43.093281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.093414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.093446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.093634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.093757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.093781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 
00:27:04.016 [2024-07-10 15:50:43.093939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.094098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.094124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.094258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.094421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.094453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.094611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.094803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.094828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.094954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.095086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.095110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.095302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.095458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.095484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.095646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.095779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.095804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.095967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.096158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.096183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 
00:27:04.016 [2024-07-10 15:50:43.096346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.096478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.096503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.096671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.096857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.096881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.097047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.097204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.097228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.097397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.097562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.097587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.097740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.097902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.097927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.098064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.098238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.098267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.098431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.098590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.098615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 
00:27:04.016 [2024-07-10 15:50:43.098870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.099035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.099060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.099201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.099338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.099363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.099523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.099662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.099686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.099823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.099988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.100013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.100173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.100361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.100386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.100551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.100695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.100720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.100884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.101098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.101123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 
00:27:04.016 [2024-07-10 15:50:43.101289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.101458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.101483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.101621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.101753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.101782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.101958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.102096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.102123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.016 qpair failed and we were unable to recover it. 00:27:04.016 [2024-07-10 15:50:43.102351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.016 [2024-07-10 15:50:43.102512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.102537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.102678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.102808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.102833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.102967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.103126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.103151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.103316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.103451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.103477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 
00:27:04.017 [2024-07-10 15:50:43.103663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.103795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.103819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.103979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.104103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.104127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.104264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.104438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.104465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.104594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.104783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.104808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.104971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.105123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.105148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.105314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.105478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.105506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.105650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.105835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.105860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 
00:27:04.017 [2024-07-10 15:50:43.106019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.106177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.106202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.106391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.106523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.106549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.106684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.106842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.106866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.107031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.107190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.107215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.017 qpair failed and we were unable to recover it. 00:27:04.017 [2024-07-10 15:50:43.107349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.107488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.017 [2024-07-10 15:50:43.107513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.107663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.107821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.107845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.107978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.108160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.108184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 
00:27:04.018 [2024-07-10 15:50:43.108353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.108489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.108514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.108685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.108825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.108851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.109007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.109164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.109188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.109353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.109541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.109566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.109699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.109861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.109886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.110045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.110202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.110226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.110417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.110549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.110575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 
00:27:04.018 [2024-07-10 15:50:43.110707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.110843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.110867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.111056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.111183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.111207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.111368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.111500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.111526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.111687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.111881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.111905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.112089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.112221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.112246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.112377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.112544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.112576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.112741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.112895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.112920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 
00:27:04.018 [2024-07-10 15:50:43.113078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.113243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.113267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.113403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.113565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.113590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.113779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.113914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.113939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.114073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.114249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.114273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.114437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.114579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.114605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.114808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.114960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.114984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.115173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.115305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.115330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 
00:27:04.018 [2024-07-10 15:50:43.115468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.115598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.115629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.115791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.115917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.115942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.116127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.116286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.116310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.116473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.116606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.116632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.116770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.116960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.116985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.018 [2024-07-10 15:50:43.117150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.117309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.018 [2024-07-10 15:50:43.117334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.018 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.117495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.117655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.117680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 
00:27:04.019 [2024-07-10 15:50:43.117876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.118013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.118039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.118212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.118381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.118405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.118570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.118734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.118758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.118950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.119109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.119137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.119298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.119478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.119504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.119696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.119825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.119849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.120008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.120163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.120187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 
00:27:04.019 [2024-07-10 15:50:43.120374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.120536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.120562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.120748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.120886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.120910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.121042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.121177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.121202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.121363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.121529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.121555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.121691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.121823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.121848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.122012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.122195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.122219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.122384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.122519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.122544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 
00:27:04.019 [2024-07-10 15:50:43.122717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.122880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.122904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.123087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.123243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.123268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.123396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.123561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.123586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.123723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.123886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.123910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.124065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.124229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.124253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.124415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.124554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.124578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.124744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.124880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.124905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 
00:27:04.019 [2024-07-10 15:50:43.125070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.125259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.125283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.125454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.125585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.125609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.125768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.125891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.125916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.126083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.126235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.126260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.126431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.126592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.126617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.126806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.126964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.126989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.019 [2024-07-10 15:50:43.127175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.127334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.127359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 
00:27:04.019 [2024-07-10 15:50:43.127520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.127675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.019 [2024-07-10 15:50:43.127699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.019 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.127836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.127963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.127988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.128175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.128312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.128336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.128469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.128652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.128677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.128836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.128988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.129012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.129169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.129293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.129318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.129449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.129618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.129642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 
00:27:04.020 [2024-07-10 15:50:43.129802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.129957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.129981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.130137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.130274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.130299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.130439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.130600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.130625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.130757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.130884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.130908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.131037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.131188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.131212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.131402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.131530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.131555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.131686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.131875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.131900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 
00:27:04.020 [2024-07-10 15:50:43.132059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.132244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.132269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.132431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.132571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.132597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.132762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.132929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.132954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.133112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.133264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.133288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.133415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.133584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.133609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.133785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.133944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.133969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.134104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.134268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.134293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 
00:27:04.020 [2024-07-10 15:50:43.134421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.134608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.134632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.134790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.134974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.134999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.135160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.135325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.135350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.135477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.135602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.135626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.135819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.135976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.136000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.136167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.136320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.136348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.136514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.136670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.136695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 
00:27:04.020 [2024-07-10 15:50:43.136884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.137041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.137066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.137250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.137380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.137406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.137575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.137701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.137726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.137915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.138070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.020 [2024-07-10 15:50:43.138096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.020 qpair failed and we were unable to recover it. 00:27:04.020 [2024-07-10 15:50:43.138276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.138405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.138436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.138570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.138751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.138775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.138936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.139120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.139144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-07-10 15:50:43.139332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.139470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.139496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.139660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.139781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.139806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.139969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.140136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.140160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.140294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.140432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.140457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.140617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.140777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.140801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.140967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.141125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.141150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.141311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.141492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.141517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-07-10 15:50:43.141659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.141825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.141850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.142017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.142180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.142206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.142361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.142523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.142549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.142714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.142871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.142895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.143060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.143246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.143270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.143403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.143572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.143597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.143761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.143920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.143945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-07-10 15:50:43.144088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.144249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.144274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.144412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.144560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.144585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.144768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.144902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.144929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.145070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.145228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.145252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.145411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.145557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.145582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.145741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.145900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.145924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 00:27:04.021 [2024-07-10 15:50:43.146082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.146217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.021 [2024-07-10 15:50:43.146242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.021 qpair failed and we were unable to recover it. 
00:27:04.021 [2024-07-10 15:50:43.146404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.146569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.146594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:04.021 qpair failed and we were unable to recover it.
00:27:04.021 [2024-07-10 15:50:43.146783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.146962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.146990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.021 qpair failed and we were unable to recover it.
00:27:04.021 [2024-07-10 15:50:43.147140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.147312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.147338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.021 qpair failed and we were unable to recover it.
00:27:04.021 [2024-07-10 15:50:43.147536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.147697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.147723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.021 qpair failed and we were unable to recover it.
00:27:04.021 [2024-07-10 15:50:43.147859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.148021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.148046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.021 qpair failed and we were unable to recover it.
00:27:04.021 [2024-07-10 15:50:43.148191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.148319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.148345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.021 qpair failed and we were unable to recover it.
00:27:04.021 [2024-07-10 15:50:43.148482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.148479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:27:04.021 [2024-07-10 15:50:43.148613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.021 [2024-07-10 15:50:43.148622] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:04.021 [2024-07-10 15:50:43.148638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.022 [2024-07-10 15:50:43.148642] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:04.022 qpair failed and we were unable to recover it.
00:27:04.022 [2024-07-10 15:50:43.148656] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:04.022 [2024-07-10 15:50:43.148714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:27:04.022 [2024-07-10 15:50:43.148770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.148767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:27:04.022 [2024-07-10 15:50:43.148916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.148940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.022 qpair failed and we were unable to recover it.
00:27:04.022 [2024-07-10 15:50:43.148824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:27:04.022 [2024-07-10 15:50:43.148827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:27:04.022 [2024-07-10 15:50:43.149106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.149239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.149264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.022 qpair failed and we were unable to recover it.
00:27:04.022 [2024-07-10 15:50:43.149407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.149551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.149577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.022 qpair failed and we were unable to recover it.
00:27:04.022 [2024-07-10 15:50:43.149779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.150008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.150033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.022 qpair failed and we were unable to recover it.
00:27:04.022 [2024-07-10 15:50:43.150167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.150306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.022 [2024-07-10 15:50:43.150332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.022 qpair failed and we were unable to recover it.
00:27:04.022 [2024-07-10 15:50:43.150477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.150646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.150671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.150835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.150965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.150991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.151138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.151278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.151304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.151439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.151572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.151597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.151770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.151896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.151921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.152087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.152252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.152277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.152421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.152753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.152778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-07-10 15:50:43.152969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.153149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.153179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.153324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.153467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.153493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.153628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.153788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.153813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.153974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.154106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.154131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.154258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.154418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.154448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.154588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.154721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.154746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.154902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.155034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.155059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 
00:27:04.022 [2024-07-10 15:50:43.155220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.155381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.155405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.155570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.155695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.155719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.155884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.156013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.156038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.156191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.156332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.156357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.156524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.156676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.156701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.022 qpair failed and we were unable to recover it. 00:27:04.022 [2024-07-10 15:50:43.156841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.022 [2024-07-10 15:50:43.156969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.156994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.157133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.157268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.157293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 
00:27:04.023 [2024-07-10 15:50:43.157438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.157577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.157601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.157777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.157912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.157937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.158077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.158288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.158313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.158475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.158662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.158687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.158818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.159053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.159077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.159241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.159370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.159394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.159562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.159750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.159780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 
00:27:04.023 [2024-07-10 15:50:43.159917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.160073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.160098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.160241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.160369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.160394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.160547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.160687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.160715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.160844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.160984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.161009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.161152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.161314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.161339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.161473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.161615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.161639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 00:27:04.023 [2024-07-10 15:50:43.161791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.161914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.023 [2024-07-10 15:50:43.161938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.023 qpair failed and we were unable to recover it. 
00:27:04.023 [2024-07-10 15:50:43.162182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.023 [2024-07-10 15:50:43.162321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.023 [2024-07-10 15:50:43.162346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:04.023 qpair failed and we were unable to recover it.
00:27:04.023 [... the same three-entry sequence (two "connect() failed, errno = 111" lines from posix.c:1032:posix_sock_create, one "sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420" line from nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock, followed by "qpair failed and we were unable to recover it.") repeats for every retry from 2024-07-10 15:50:43.162485 through 15:50:43.205704 ...]
00:27:04.027 [2024-07-10 15:50:43.205865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.028 [2024-07-10 15:50:43.206047] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.028 [2024-07-10 15:50:43.206078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420
00:27:04.028 qpair failed and we were unable to recover it.
00:27:04.028 [... the same sequence repeats for tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 through 2024-07-10 15:50:43.210501 ...]
00:27:04.028 [2024-07-10 15:50:43.210759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.028 [2024-07-10 15:50:43.210936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.028 [2024-07-10 15:50:43.210965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420
00:27:04.028 qpair failed and we were unable to recover it.
00:27:04.028 [... the same sequence repeats for tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 through 2024-07-10 15:50:43.212226 ...]
00:27:04.028 [2024-07-10 15:50:43.212353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.212505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.212532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-07-10 15:50:43.212721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.212858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.212884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-07-10 15:50:43.213016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.213156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.213183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-07-10 15:50:43.213323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.213482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.213508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-07-10 15:50:43.213641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.213771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.213796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-07-10 15:50:43.213938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.214079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.214106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-07-10 15:50:43.214242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.214383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.214411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 
00:27:04.028 [2024-07-10 15:50:43.214581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.214715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.214740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.028 [2024-07-10 15:50:43.214866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.215025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.028 [2024-07-10 15:50:43.215051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.028 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.215191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.215347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.215372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.215500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.215633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.215658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.215808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.215955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.215983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.216152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.216287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.216312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.216451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.216586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.216612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 
00:27:04.029 [2024-07-10 15:50:43.216749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.216913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.216940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.217084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.217249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.217274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.217421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.217566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.217592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.217762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.217924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.217953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.218100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.218229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.218256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.218411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.218556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.218582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.218727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.218859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.218884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 
00:27:04.029 [2024-07-10 15:50:43.219031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.219190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.219221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.219358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.219493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.219519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.219653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.219826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.219851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.219994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.220126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.220151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.220283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.220415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.220446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.220577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.220713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.220737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.220872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.221024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.221048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 
00:27:04.029 [2024-07-10 15:50:43.221197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.221326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.221350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.221587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.221716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.221740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.221884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.222026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.222051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.222190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.222322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.222351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.222576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.222705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.222730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.222859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.223169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 
00:27:04.029 [2024-07-10 15:50:43.223475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.223770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.223974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.224099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.224224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.224248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.224372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.224502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.224527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.029 qpair failed and we were unable to recover it. 00:27:04.029 [2024-07-10 15:50:43.224658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.224797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.029 [2024-07-10 15:50:43.224822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.224959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.225088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.225113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.225249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.225409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.225441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 
00:27:04.030 [2024-07-10 15:50:43.225596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.225775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.225800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.225932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.226062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.226087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.226222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.226374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.226398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.226547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.226682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.226707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.226848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.227165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.227481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 
00:27:04.030 [2024-07-10 15:50:43.227799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.227962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.228091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.228322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.228347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.228492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.228641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.228666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.228815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.228938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.228963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.229106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.229244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.229269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.229413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.229553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.229579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.229714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.229845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.229869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 
00:27:04.030 [2024-07-10 15:50:43.230002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.230132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.230157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.230280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.230404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.230434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.230585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.230728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.230753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.230893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.231037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.231062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.231221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.231342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.231367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.231503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.231650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.231675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.231820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.231979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.232004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 
00:27:04.030 [2024-07-10 15:50:43.232141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.232275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.232301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.232445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.232607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.232632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.232765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.232899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.232926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.233063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.233199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.030 [2024-07-10 15:50:43.233223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.030 qpair failed and we were unable to recover it. 00:27:04.030 [2024-07-10 15:50:43.233366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.233496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.233521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.233753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.233898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.233923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.234155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.234308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.234333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 
00:27:04.031 [2024-07-10 15:50:43.234478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.234613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.234640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.234775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.234934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.234958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.235094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.235225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.235250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.235382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.235524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.235549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.235684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.235846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.235871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.236012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.236142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.236167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.236308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.236461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.236487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 
00:27:04.031 [2024-07-10 15:50:43.236630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.236765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.236790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.236935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.237066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.237091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.237224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.237359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.237384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.237540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.237667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.237692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.237833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.237979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.238004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.238154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.238290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.238318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.238461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.238599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.238623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 
00:27:04.031 [2024-07-10 15:50:43.238791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.238930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.238955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.239083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.239221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.239245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.239412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.239542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.239567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.239703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.239832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.239857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.240011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.240165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.240190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.240344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.240496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.240522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.240687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.240813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.240838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 
00:27:04.031 [2024-07-10 15:50:43.240981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.241145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.241169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.241291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.241447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.241472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.241651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.241787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.241812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.241975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.242104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.242130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.242288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.242440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.242465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.242625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.242779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.031 [2024-07-10 15:50:43.242804] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.031 qpair failed and we were unable to recover it. 00:27:04.031 [2024-07-10 15:50:43.242926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.243088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.243112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 
00:27:04.032 [2024-07-10 15:50:43.243247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.243385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.243409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.243545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.243666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.243691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.243858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.244175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.244490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.244812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.244964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.245091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.245242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.245267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 
00:27:04.032 [2024-07-10 15:50:43.245435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.245559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.245584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.245715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.245841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.245866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.246104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.246329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.246353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.246517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.246680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.246705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.246842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.247007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.247032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.247201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.247334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.247359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.247492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.247702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.247727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 
00:27:04.032 [2024-07-10 15:50:43.247859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.247991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.248016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.248178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.248328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.248352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.248488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.248641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.248666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.248808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.248972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.248996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.249172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.249303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.249327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.249477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.249612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.249637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.249767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.249924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.249949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 
00:27:04.032 [2024-07-10 15:50:43.250132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.250265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.250289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.250432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.250567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.250591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.250738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.250906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.250930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.251070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.251251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.251276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.251436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.251569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.251594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.251725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.251885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.251909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.252049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.252189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.252215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 
00:27:04.032 [2024-07-10 15:50:43.252358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.252530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.252555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.252715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.252836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.032 [2024-07-10 15:50:43.252860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.032 qpair failed and we were unable to recover it. 00:27:04.032 [2024-07-10 15:50:43.253018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.253179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.253203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.253339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.253465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.253491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.253649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.253820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.253844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.254007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.254136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.254160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.254322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.254476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.254501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-07-10 15:50:43.254638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.254781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.254809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.254942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.255124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.255148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.255278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.255431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.255456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.255610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.255744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.255769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.255933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.256087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.256112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.256273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.256407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.256444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.256576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.256711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.256735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-07-10 15:50:43.256887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.257072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.257097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.257230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.257352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.257376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.257512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.257702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.257727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.257869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.258024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.258049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.258215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.258374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.258398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.258547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.258680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.258705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.258866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.259027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.259051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-07-10 15:50:43.259192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.259363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.259387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.259527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.259661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.259687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.259854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.259985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.260010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.260141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.260296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.260321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.260479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.260619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.260644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.260800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.260957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.260981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.261146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.261305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.261330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 
00:27:04.033 [2024-07-10 15:50:43.261508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.261666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.261691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.261828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.261954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.261979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.262120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.262249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.262273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.262409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.262544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.262569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.262715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.262843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.033 [2024-07-10 15:50:43.262868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.033 qpair failed and we were unable to recover it. 00:27:04.033 [2024-07-10 15:50:43.263028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.263199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.263224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.263359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.263529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.263554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-07-10 15:50:43.263693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.263852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.263877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.264010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.264144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.264169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.264293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.264454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.264479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.264640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.264812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.264836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.264975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.265133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.265158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.265290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.265452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.265478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.265633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.265762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.265786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-07-10 15:50:43.265933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.266084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.266108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.266267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.266430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.266455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.266584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.266711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.266736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.266904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.267065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.267090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.267269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.267396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.267421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.267602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.267760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.267785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.267961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.268118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.268143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-07-10 15:50:43.268298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.268471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.268497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.268661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.268823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.268848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.268988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.269147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.269172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.269328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.269471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.269496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.269623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.269754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.269779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.269909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.270071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.270096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 00:27:04.034 [2024-07-10 15:50:43.270239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.270361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.270385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.034 qpair failed and we were unable to recover it. 
00:27:04.034 [2024-07-10 15:50:43.270524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.270657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.034 [2024-07-10 15:50:43.270682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.270868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.271179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.271502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.271819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.271997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.272157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.272285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.272309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.272450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.272605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.272631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 
00:27:04.035 [2024-07-10 15:50:43.272778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.272936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.272961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.273097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.273227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.273252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.273404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.273548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.273573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.273726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.273868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.273893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.274062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.274216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.274241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.274378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.274542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.274571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.274718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.274860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.274885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 
00:27:04.035 [2024-07-10 15:50:43.275009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.275191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.275215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.275377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.275536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.275562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.275704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.275841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.275866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.035 qpair failed and we were unable to recover it. 00:27:04.035 [2024-07-10 15:50:43.276003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.035 [2024-07-10 15:50:43.276129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.276154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.276296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.276453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.276481] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.276609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.276730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.276756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.276895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.277017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.277042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 
00:27:04.036 [2024-07-10 15:50:43.277188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.277359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.277383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.277537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.277661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.277686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.277848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278047] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.278197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.278513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.278808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.278961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.279093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.279222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.279247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 
00:27:04.036 [2024-07-10 15:50:43.279392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.279555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.279580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.279711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.279851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.279876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.280024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.280151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.280175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.280307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.280443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.280469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.280603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.280729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.280753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.280887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.281024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.281049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.281190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.281326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.281352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 
00:27:04.036 [2024-07-10 15:50:43.281512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.281673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.281698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.281851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.281978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.282002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.282133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.282299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.282324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.036 qpair failed and we were unable to recover it. 00:27:04.036 [2024-07-10 15:50:43.282499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.282629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.036 [2024-07-10 15:50:43.282654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.037 qpair failed and we were unable to recover it. 00:27:04.037 [2024-07-10 15:50:43.282807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.037 [2024-07-10 15:50:43.282959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.037 [2024-07-10 15:50:43.282984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.037 qpair failed and we were unable to recover it. 00:27:04.037 [2024-07-10 15:50:43.283133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.037 [2024-07-10 15:50:43.283265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.037 [2024-07-10 15:50:43.283290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.037 qpair failed and we were unable to recover it. 00:27:04.037 [2024-07-10 15:50:43.283453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.037 [2024-07-10 15:50:43.283608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.037 [2024-07-10 15:50:43.283633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.037 qpair failed and we were unable to recover it. 
00:27:04.037 [2024-07-10 15:50:43.283760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-07-10 15:50:43.283893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.037 [2024-07-10 15:50:43.283918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:04.037 qpair failed and we were unable to recover it.
[... the same four-entry sequence -- two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock error for tqpair=0x1d599f0 (addr=10.0.0.2, port=4420), and "qpair failed and we were unable to recover it." -- repeats continuously with new timestamps from 15:50:43.284 through 15:50:43.332 ...]
00:27:04.042 [2024-07-10 15:50:43.332193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.042 [2024-07-10 15:50:43.332314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.042 [2024-07-10 15:50:43.332338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:04.042 qpair failed and we were unable to recover it.
00:27:04.042 [2024-07-10 15:50:43.332474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.332629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.332654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.042 qpair failed and we were unable to recover it. 00:27:04.042 [2024-07-10 15:50:43.332814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.332971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.332995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.042 qpair failed and we were unable to recover it. 00:27:04.042 [2024-07-10 15:50:43.333128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.333257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.333282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.042 qpair failed and we were unable to recover it. 00:27:04.042 [2024-07-10 15:50:43.333444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.333606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.333631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.042 qpair failed and we were unable to recover it. 00:27:04.042 [2024-07-10 15:50:43.333795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.333955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.333980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.042 qpair failed and we were unable to recover it. 00:27:04.042 [2024-07-10 15:50:43.334118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.334239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.334264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.042 qpair failed and we were unable to recover it. 00:27:04.042 [2024-07-10 15:50:43.334390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.334567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.334591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.042 qpair failed and we were unable to recover it. 
00:27:04.042 [2024-07-10 15:50:43.334726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.042 [2024-07-10 15:50:43.334863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.334888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.335018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.335180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.335207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.335332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.335473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.335498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.335690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.335838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.335863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.335990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.336134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.336158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.336288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.336414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.336444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.336590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.336718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.336743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 
00:27:04.043 [2024-07-10 15:50:43.336915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.337101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.337126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.337282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.337448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.337477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.337609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.337758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.337783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.337953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.338094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.338118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.338276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.338446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.338479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.338619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.338758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.338782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.338908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.339071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.339100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 
00:27:04.043 [2024-07-10 15:50:43.339231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.339390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.339415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.339569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.339712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.339736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.339892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.340048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.340073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.340193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.340320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.340344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.340512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.340645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.340669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.340831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.340997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.341022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 00:27:04.043 [2024-07-10 15:50:43.341154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.341314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.043 [2024-07-10 15:50:43.341339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.043 qpair failed and we were unable to recover it. 
00:27:04.043 [2024-07-10 15:50:43.341478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.341602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.341627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.341753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.341915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.341941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.342079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.342209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.342238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.342419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.342574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.342599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.342721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.342877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.342901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.343062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.343220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.343244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.343368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.343496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.343522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 
00:27:04.044 [2024-07-10 15:50:43.343653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.343839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.343864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.344008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.344181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.344205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.344352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.344510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.344536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.344671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.344813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.344838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.345028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.345149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.345173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.345363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.345490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.345516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.345672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.345801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.345826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 
00:27:04.044 [2024-07-10 15:50:43.345995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.346151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.346176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.346324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.346452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.346477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.346635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.346810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.346834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.346987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.347113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.347138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.347275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.347434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.347459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.347591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.347759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.347783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 00:27:04.044 [2024-07-10 15:50:43.347940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.348088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.348124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.044 qpair failed and we were unable to recover it. 
00:27:04.044 [2024-07-10 15:50:43.348285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.044 [2024-07-10 15:50:43.348414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.348446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.348591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.348719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.348745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.348910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.349070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.349095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.349231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.349391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.349417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.349590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.349732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.349757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.349885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.350160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-07-10 15:50:43.350471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.350807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.350959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.351083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.351212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.351237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.351373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.351511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.351538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.351679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.351815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.351840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.351996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.352186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.352211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.352334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.352457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.352492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-07-10 15:50:43.352625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.352790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.352814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.352971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.353111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.353136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.353266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.353391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.353416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.353570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.353729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.353754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.353890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.354029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.354054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.354189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.354373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.354397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 00:27:04.045 [2024-07-10 15:50:43.354545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.354674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.045 [2024-07-10 15:50:43.354698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.045 qpair failed and we were unable to recover it. 
00:27:04.045 [2024-07-10 15:50:43.354830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.354962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.354986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.355113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.355243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.355267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.355398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.355580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.355606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.355745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.355901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.355926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.356063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.356190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.356215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.356375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.356504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.356530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.356662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.356788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.356813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-07-10 15:50:43.356969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.357104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.357128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.357268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.357416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.357446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.357579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.357738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.357764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.357900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.358058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.358082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.358208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.358339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.358372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.358538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.358681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.358706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.358845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.358976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.359001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 
00:27:04.046 [2024-07-10 15:50:43.359124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.359261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.359286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.046 [2024-07-10 15:50:43.359433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.359566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.046 [2024-07-10 15:50:43.359591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.046 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.359712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.359869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.359894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.360023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.360176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.360200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.360334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.360476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.360502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.360632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.360816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.360840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.360963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.361102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.361127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 
00:27:04.047 [2024-07-10 15:50:43.361255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.361398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.361432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.361593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.361716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.361741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.361903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.362074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.362098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.362245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.362372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.362397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.362539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.362688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.362713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.362856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.362979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.363003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 00:27:04.047 [2024-07-10 15:50:43.363152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.363280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.047 [2024-07-10 15:50:43.363305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.047 qpair failed and we were unable to recover it. 
00:27:04.047 [2024-07-10 15:50:43.363435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.047 [2024-07-10 15:50:43.363599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.047 [2024-07-10 15:50:43.363624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420
00:27:04.047 qpair failed and we were unable to recover it.
[... the same failure pattern repeats continuously from 15:50:43.363435 through 15:50:43.412642 (elapsed-time prefixes 00:27:04.047 through 00:27:04.319): each attempt logs two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." ...]
00:27:04.319 [2024-07-10 15:50:43.412783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.412940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.412964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.413116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.413247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.413271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.413418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.413610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.413635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.413763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.413895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.413920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.414055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.414203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.414228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.414349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.414493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.414519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.414665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.414823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.414848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 
00:27:04.319 [2024-07-10 15:50:43.414992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.415130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.415155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.415295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.415454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.415479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.415618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.415755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.415781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.415912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.416074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.416099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.416241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.416396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.416420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.416581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.416723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.416747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.416889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.417050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.417075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 
00:27:04.319 [2024-07-10 15:50:43.417211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.417340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.417366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.417568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.417705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.417730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.417891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.418210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.418504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.418808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.418966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.419128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.419299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.419324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 
00:27:04.319 [2024-07-10 15:50:43.419484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.419643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.419667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.419805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.419962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.419987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.420150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.420283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.420308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.420453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.420597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.420622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.420810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.420944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.420971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.421133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.421267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.421292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.421417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.421569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.421594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 
00:27:04.319 [2024-07-10 15:50:43.421733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.421861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.421886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.422041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.422183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.422207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.422350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.422489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.422515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.422669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.422839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.422864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.423000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.423130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.423154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.423283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.423448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.423474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.319 [2024-07-10 15:50:43.423618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.423790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.423815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 
00:27:04.319 [2024-07-10 15:50:43.423961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.424090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.319 [2024-07-10 15:50:43.424115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.319 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.424265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.424414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.424443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.424585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.424727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.424753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.424914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.425052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.425076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.425236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.425375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.425400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.425562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.425726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.425753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.425891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.426025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.426050] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.426202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.426360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.426386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.426528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.426671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.426700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.426837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.426995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.427029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.427160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.427307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.427332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.427461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.427626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.427651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.427785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.427950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.427976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.428109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.428263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.428288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.428437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.428578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.428603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.428755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.428887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.428911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.429037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.429180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.429205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.429361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.429498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.429524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.429653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.429802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.429826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.429991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.430153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.430178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.430315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.430473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.430498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.430631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.430767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.430791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.430923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.431057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.431081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.431231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.431363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.431388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.431566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.431733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.431758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.431895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.432218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.432544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.432824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.432993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.433160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.433304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.433328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.433470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.433600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.433625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.433798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.433942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.433967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.434099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.434237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.434261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.434412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.434558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.434583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.434725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.434856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.434881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.435026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.435217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.435241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.435372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.435507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.435534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.435669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.435809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.435834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.435967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.436124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.436148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.436303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.436454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.436490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.436648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.436802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.436828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.436989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.437121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.437145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.437289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.437419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.437451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.437639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.437786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.437812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.437949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.438135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.438160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.438291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.438422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.438452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.438627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.438794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.438819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.438984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.439129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.439155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.439285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.439417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.439449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.439627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.439771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.439799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.439948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.440079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.440104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.440254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.440387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.440412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.440588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.440735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.440760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.440907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.441078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.441103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.441266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.441391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.441416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.441571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.441737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.441762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 
00:27:04.320 [2024-07-10 15:50:43.441906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.442036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.442061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.442218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.442370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.442395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.442533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.442663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.320 [2024-07-10 15:50:43.442688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.320 qpair failed and we were unable to recover it. 00:27:04.320 [2024-07-10 15:50:43.442829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.442963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.442989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.321 qpair failed and we were unable to recover it. 00:27:04.321 [2024-07-10 15:50:43.443117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.443268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.443292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.321 qpair failed and we were unable to recover it. 00:27:04.321 [2024-07-10 15:50:43.443463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.443597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.443623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.321 qpair failed and we were unable to recover it. 00:27:04.321 [2024-07-10 15:50:43.443755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.443878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.321 [2024-07-10 15:50:43.443903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.321 qpair failed and we were unable to recover it. 
00:27:04.321 [2024-07-10 15:50:43.444065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:04.321 [2024-07-10 15:50:43.444195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:27:04.321 [2024-07-10 15:50:43.444220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 
00:27:04.321 qpair failed and we were unable to recover it. 
00:27:04.321 [2024-07-10 15:50:43.444358 ... 15:50:43.492789] (the same sequence — two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, a nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error" for addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it." — repeats for every reconnect attempt in this interval; tqpair values observed: 0x1d599f0, 0x7f69f8000b90, 0x7f6a08000b90) 
00:27:04.324 qpair failed and we were unable to recover it. 
00:27:04.324 [2024-07-10 15:50:43.492913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.493091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.493116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.493305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.493446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.493473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.493631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.493762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.493787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.493941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.494083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.494108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.494267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.494441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.494468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.494612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.494781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.494806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.494936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.495123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.495147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 
00:27:04.324 [2024-07-10 15:50:43.495276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.495395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.495419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.495558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.495681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.495706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.495862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.496181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.496497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.496792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.496937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.497113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.497242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.497267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 
00:27:04.324 [2024-07-10 15:50:43.497397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.497535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.497560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.497706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.497870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.497895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.498040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.498175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.498200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.498330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.498494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.498520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.498659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.498799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.498823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.498966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.499105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.499130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.499313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.499445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.499471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 
00:27:04.324 [2024-07-10 15:50:43.499602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.499749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.499774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.499901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.500193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.500507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.500816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.500974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.501121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.501250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.501274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.501416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.501584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.501609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 
00:27:04.324 [2024-07-10 15:50:43.501738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.501893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.324 [2024-07-10 15:50:43.501919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.324 qpair failed and we were unable to recover it. 00:27:04.324 [2024-07-10 15:50:43.502075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.502206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.502231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.502392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.502535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.502560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.502693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.502825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.502850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.502978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.503136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.503160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.503294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.503416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.503456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.503620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.503775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.503799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 
00:27:04.325 [2024-07-10 15:50:43.503932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.504082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.504107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.504245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.504418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.504449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.504582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.504736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.504761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.504910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.505194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.505516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.505803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.505955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 
00:27:04.325 [2024-07-10 15:50:43.506081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.506208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.506233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.506378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.506525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.506550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.506684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.506813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.506838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.506974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.507145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.507170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.507295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.507484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.507509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.507665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.507835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.507860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.507988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.508173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.508198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 
00:27:04.325 [2024-07-10 15:50:43.508330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.508469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.508494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.508627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.508783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.508808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.508983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.509107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.509131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.509289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.509421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.509454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.509627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.509763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.509789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.509925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.510085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.510109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.510275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.510397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.510421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 
00:27:04.325 [2024-07-10 15:50:43.510563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.510700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.510725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.510886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.511164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.511482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.511824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.511977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.512105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.512233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.512257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.512383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.512530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.512556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 
00:27:04.325 [2024-07-10 15:50:43.512713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.512897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.512922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.513043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.513176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.513201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.513327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.513495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.513520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.513651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.513811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.513835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.513965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.514101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.514127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.514272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.514401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.514430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.514595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.514727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.514752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 
00:27:04.325 [2024-07-10 15:50:43.514884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.515200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.515486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.515781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.515926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.516051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.516177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.516201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.516360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.516482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.516511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.516674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.516836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.516862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 
00:27:04.325 [2024-07-10 15:50:43.517013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.517185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.517210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.517334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.517475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.517501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.325 qpair failed and we were unable to recover it. 00:27:04.325 [2024-07-10 15:50:43.517633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.325 [2024-07-10 15:50:43.517766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.517791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.517951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.518094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.518118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.518279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.518440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.518466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.518610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.518748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.518772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.518922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.519108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.519133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 
00:27:04.326 [2024-07-10 15:50:43.519295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.519437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.519462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.519620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.519768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.519792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.519944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.520084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.520108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.520246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.520381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.520406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.520565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.520720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.520744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.520875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.521230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 
00:27:04.326 [2024-07-10 15:50:43.521541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.521837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.521999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.522173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.522330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.522354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.522510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.522639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.522664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.522818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.523194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.523543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 
00:27:04.326 [2024-07-10 15:50:43.523829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.523985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.524119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.524282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.524306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.524468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.524604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.524629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.524762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.524917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.524942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.525104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.525259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.525283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.525412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.525565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.525590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.525778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.525910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.525935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 
00:27:04.326 [2024-07-10 15:50:43.526109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.526249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.526275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.526436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.526563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.526588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.526716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.526854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.526879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.527016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.527179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.527205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.527366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.527519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.527544] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.527732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.527863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.527888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.528049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.528211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.528236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 
00:27:04.326 [2024-07-10 15:50:43.528390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.528539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.528564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.528696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.528858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.528883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.529018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.529173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.529198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.529325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.529473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.529499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.529635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.529773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.529800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.529929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.530082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.530107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.530243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.530372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.530397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 
00:27:04.326 [2024-07-10 15:50:43.530527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.530660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.530685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.530827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.530991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.531017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.531178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.531312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.531338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.531469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.531598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.531623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.531764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.531920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.531945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.532072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.532195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.532220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.532358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.532503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.532528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 
00:27:04.326 [2024-07-10 15:50:43.532687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.532839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.532870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.532997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.533183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.533208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.326 qpair failed and we were unable to recover it. 00:27:04.326 [2024-07-10 15:50:43.533334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.533474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.326 [2024-07-10 15:50:43.533500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.533632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.533767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.533794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.533936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.534060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.534085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.534230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.534383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.534408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.534552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.534711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.534736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 
00:27:04.327 [2024-07-10 15:50:43.534858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.535192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.535481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.535812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.535961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.536106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.536229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.536253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.536412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.536564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.536588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.536726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.536886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.536915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 
00:27:04.327 [2024-07-10 15:50:43.537049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.537210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.537234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.537394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.537565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.537590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.537762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.537887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.537912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.538037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.538195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.538220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.538393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.538540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.538565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.538698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.538854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.538879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.539067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.539201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.539226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 
00:27:04.327 [2024-07-10 15:50:43.539378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.539531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.539557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.539691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.539833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.539857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.540011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.540138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.540163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.540321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.540493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.540519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.540661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.540787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.540812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.540934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.541088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.541113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.541278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.541412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.541442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 
00:27:04.327 [2024-07-10 15:50:43.541578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.541705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.541730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.541860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.541993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.542018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.542182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.542321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.542346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.542493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.542627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.542651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.542775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.542957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.542981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.543143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.543271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.543295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.543437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.543570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.543594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 
00:27:04.327 [2024-07-10 15:50:43.543720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.543877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.543902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.544062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.544196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.544222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.544363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.544521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.544547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.544674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.544806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.544830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.544961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.545116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.545141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.545289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.545431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.545457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.545594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.545732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.545756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 
00:27:04.327 [2024-07-10 15:50:43.545898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.546055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.546079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.546210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.546377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.546401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.546548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.546672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.546696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.546825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.546979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.547004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.547128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.547313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.547337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.327 [2024-07-10 15:50:43.547470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.547600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.327 [2024-07-10 15:50:43.547625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.327 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.547766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.547936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.547961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.548119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.548250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.548275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.548398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.548547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.548572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.548697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.548821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.548845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.549001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.549159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.549183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.549340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.549486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.549513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.549691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.549832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.549857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.550003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.550133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.550158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.550296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.550440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.550465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.550620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.550744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.550769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.550893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.551052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.551077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.551236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.551360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.551384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.551577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.551713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.551737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.551889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.552029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.552058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.552219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.552372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.552396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.552539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.552666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.552691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.552879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.553019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.553044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.553201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.553363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.553389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.553530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.553686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.553711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.553855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.554160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.554491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.554817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.554994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.555140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.555263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.555288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.555454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.555605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.555629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.555762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.555903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.555927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.556064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.556219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.556243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.556382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.556554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.556579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.556737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.556872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.556896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.557020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.557172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.557197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.557336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.557502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.557528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.557661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.557788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.557814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.557947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.558113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.558138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.558262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.558391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.558415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.558592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.558764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.558788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.558916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.559045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.559069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.559243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.559401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.559433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.559569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.559699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.559725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.559855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.560019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.560044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.560205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.560339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.560364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.560499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.560671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.560696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.560848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.561175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.561506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.561787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.561946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.562083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.562242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.562266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.562391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.562548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.562573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.562707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.562868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.562892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.563067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.563201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.563225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.563392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.563538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.563563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 
00:27:04.328 [2024-07-10 15:50:43.563686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.563811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.563835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.564007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.564149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.564174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.564328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.564488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.564514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.564649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.564778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.564803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.564980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.565109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.565134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.565291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.565478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.565503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.328 qpair failed and we were unable to recover it. 00:27:04.328 [2024-07-10 15:50:43.565638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.565767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.328 [2024-07-10 15:50:43.565792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.329 qpair failed and we were unable to recover it. 
00:27:04.331 [2024-07-10 15:50:43.610285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.610415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.610445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.610586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.610715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.610741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.610917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.611202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.611484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.611795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.611943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.612101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.612242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.612267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 
00:27:04.331 [2024-07-10 15:50:43.612410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.612568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.612593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.612725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.612849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.612874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.613017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.613173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.613198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.613329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.613509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.613535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.613669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.613809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.613834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.613968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.614128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.614152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 00:27:04.331 [2024-07-10 15:50:43.614288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.614419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.614448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.331 qpair failed and we were unable to recover it. 
00:27:04.331 [2024-07-10 15:50:43.614591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.614722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.331 [2024-07-10 15:50:43.614747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.614911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.615056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.615080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.615231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.615357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.615383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.615565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.615690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.615715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.615849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.615999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.616024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.616210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.616338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.616363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.616522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.616680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.616704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 
00:27:04.332 [2024-07-10 15:50:43.616869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.617183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.617492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.617799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.617980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.618112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.618278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.618305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.618490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.618628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.618653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.618818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.618938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.618962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 
00:27:04.332 [2024-07-10 15:50:43.619118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.619245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.619269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.619436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.619580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.619605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.619734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.619860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.619889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.620026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.620152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.620176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.620306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.620465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.620490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.620625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.620757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.620781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.620942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.621097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.621122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 
00:27:04.332 [2024-07-10 15:50:43.621268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.621450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.621475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.621643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.621790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.621815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.621937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.622093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.622118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.622242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.622400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.622430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.622608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.622738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.622762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.622946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.623073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.623097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.623230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.623415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.623447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 
00:27:04.332 [2024-07-10 15:50:43.623571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.623744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.623768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.623928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.624057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.624082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.624223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.624383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.624407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.624558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.624744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.624769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.624924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.625043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.625068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.625230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.625374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.625399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.625569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.625705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.625729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 
00:27:04.332 [2024-07-10 15:50:43.625868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.626191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.626507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.626815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.626995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.627131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.627295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.627320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.627456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.627598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.627623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.627751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.627905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.627930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 
00:27:04.332 [2024-07-10 15:50:43.628082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.628215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.628240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.628413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.628572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.628597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.628733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.628899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.628924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.629048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.629183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.629207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.629364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.629528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.629554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.332 qpair failed and we were unable to recover it. 00:27:04.332 [2024-07-10 15:50:43.629716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.332 [2024-07-10 15:50:43.629860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.629885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.630040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.630167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.630192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.630352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.630488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.630513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.630643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.630766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.630791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.630917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.631239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.631548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.631830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.631986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.632152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.632291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.632316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.632448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.632578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.632603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.632733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.632894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.632919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.633071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.633203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.633228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.633366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.633523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.633548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.633702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.633837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.633861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.634008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.634152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.634176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.634338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.634507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.634532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.634674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.634833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.634858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.635003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.635130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.635155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.635312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.635473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.635499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.635659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.635787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.635811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.635959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.636113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.636141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.636266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.636428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.636453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.636585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.636727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.636751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.636914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.637203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.637481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.637797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.637981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.638130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.638286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.638310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.638444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.638618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.638643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.638804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.638935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.638960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.639111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.639280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.639305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.639468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.639604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.639629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.639762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.639934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.639959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.640087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.640213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.640237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.640389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.640549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.640574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.640703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.640874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.640899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.641026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.641179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.641203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.641370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.641494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.641519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.641658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.641809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.641834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.641962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.642083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.642107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.642273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.642405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.642436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.642575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.642749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.642774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.642901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.643056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.643080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.643240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.643410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.643440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.643571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.643712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.643736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.643909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.644035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.644060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.644210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.644364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.644388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.644543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.644706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.644731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.644888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.645074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.645098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.645222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.645390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.645415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.645585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.645710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.645734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.645896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.646025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.646049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.646234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.646374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.646400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.646572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.646737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.646763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.646901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.647177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.647478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.647799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.647979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 
00:27:04.333 [2024-07-10 15:50:43.648135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.648258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.648282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.333 qpair failed and we were unable to recover it. 00:27:04.333 [2024-07-10 15:50:43.648446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.333 [2024-07-10 15:50:43.648598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.648622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.648752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.648923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.648948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.649077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.649211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.649236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.649366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.649509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.649534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.649680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.649868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.649892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.650021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.650152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.650177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-07-10 15:50:43.650310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.650500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.650526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.650675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.650796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.650820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.651060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.651297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.651321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.651484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.651613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.651638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.651789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.651919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.651944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.652072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.652197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.652221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.652351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.652491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.652520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-07-10 15:50:43.652718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.652851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.652876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.653004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.653146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.653170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.653329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.653490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.653515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.653656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.653812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.653837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.653994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.654127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.654152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.654315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.654445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.654471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.654707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.654849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.654874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-07-10 15:50:43.655007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.655171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.655195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.655329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.655491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.655517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.655646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.655784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.655813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.655971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.656141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.656166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.656329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.656471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.656498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.656631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.656763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.656787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.656942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.657067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.657091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-07-10 15:50:43.657253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.657440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.657465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.657596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.657735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.657760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.657888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.658017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.658043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.658238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.658369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.658394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.658550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.658688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.658713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.658841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.659000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.659024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.659160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.659290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.659314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-07-10 15:50:43.659477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.659631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.659656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.659788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.659975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.660000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.660186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.660312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.660337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.660533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.660666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.660691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.660827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.660952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.660978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.661116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.661275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.661299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.661469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.661623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.661648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 
00:27:04.334 [2024-07-10 15:50:43.661788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.661960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.661984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.662116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.662294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.662319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.662471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.662603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.334 [2024-07-10 15:50:43.662628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.334 qpair failed and we were unable to recover it. 00:27:04.334 [2024-07-10 15:50:43.662785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.662916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.662942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.663094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.663231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.663256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.663411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.663548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.663575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.663711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.663864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.663889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.664053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.664195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.664220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.664362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.664499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.664525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.664682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.664842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.664866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.665004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.665143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.665167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.665323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.665464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.665490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.665621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.665756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.665781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.665913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.666055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.666080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.666204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.666379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.666404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.666539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.666672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.666697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.666873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.667183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.667486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.667779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.667939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.668080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.668213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.668237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.668365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.668493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.668519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.668648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.668835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.668861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.668987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.669132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.669157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.669294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.669457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.669482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.669622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.669752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.669777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.669922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.670067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.670092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.670228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.670349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.670374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.670530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.670680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.670704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.670862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.670991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.671016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.671140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.671283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.671307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.671437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.671562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.671588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.671742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.671887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.671915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.672045] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.672174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.672200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.672391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.672542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.672568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.672698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.672827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.672852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.672994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.673150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.673174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.673316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.673485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.673511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.673654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.673836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.673860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.674005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.674133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.674158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.674330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.674454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.674479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.674617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.674747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.674771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.674912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.675042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.675066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.675234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.675391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.675415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.675552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.675677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.675701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.675829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.675990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.676016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.676158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.676282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.676306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.676449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.676622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.676647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.676788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.676929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.676953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.677101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.677262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.677287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.677417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.677560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.677585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.677763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.677933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.677963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.678121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.678273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.678298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.678449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.678586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.678612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.678751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.678903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.678928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.679106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.679270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.679298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 
00:27:04.335 [2024-07-10 15:50:43.679444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.679603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.679629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.679764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.679901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.679925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.680062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.680218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.680243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.335 qpair failed and we were unable to recover it. 00:27:04.335 [2024-07-10 15:50:43.680379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.680522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.335 [2024-07-10 15:50:43.680547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.336 qpair failed and we were unable to recover it. 00:27:04.336 [2024-07-10 15:50:43.680683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.336 [2024-07-10 15:50:43.680832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.336 [2024-07-10 15:50:43.680858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.336 qpair failed and we were unable to recover it. 00:27:04.336 [2024-07-10 15:50:43.681000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.681160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.681186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.681322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.681482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.681508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 
00:27:04.609 [2024-07-10 15:50:43.681658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.681809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.681833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.681981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.682117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.682151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.682308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.682465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.682491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.682629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.682775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.682802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.682973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.683131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.683158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.683357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.683495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.683524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.683679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.683810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.683839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 
00:27:04.609 [2024-07-10 15:50:43.684017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.684164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.684191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.684370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.684523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.684549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.684711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.684876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.684901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.685048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.685186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.685213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.685344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.685510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.685537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.685672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.685815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.685841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.685988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.686131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.686156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 
00:27:04.609 [2024-07-10 15:50:43.686314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.686474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.686500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.686634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.686782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.686806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.686945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.687120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.687144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.687304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.687437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.687462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.687593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.687729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.687753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.687884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.688248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 
00:27:04.609 [2024-07-10 15:50:43.688551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.688831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.688990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.689129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.689257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.689281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.689422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.689568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.689593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.689729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.689872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.689897] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.690029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.690173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.690198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.690327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.690477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.690503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 
00:27:04.609 [2024-07-10 15:50:43.690693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.690831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.690856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.690994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.691143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.691168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.691299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.691463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.691488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.691646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.691796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.691821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.691965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.692150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.692175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.692320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.692456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.692482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.692613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.692742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.692766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 
00:27:04.609 [2024-07-10 15:50:43.692930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.693066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.693091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.609 qpair failed and we were unable to recover it. 00:27:04.609 [2024-07-10 15:50:43.693223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.609 [2024-07-10 15:50:43.693365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.693390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.693536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.693679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.693703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.693834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.693967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.693992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.694120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.694248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.694273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.694421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.694588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.694614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.694746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.694908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.694933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 
00:27:04.610 [2024-07-10 15:50:43.695076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.695207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.695232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.695371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.695506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.695533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.695675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.695809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.695835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.695961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.696094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.696118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.696278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.696408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.696437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.696573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.696730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.696754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.696884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.697020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.697044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 
00:27:04.610 [2024-07-10 15:50:43.697171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.697304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.697330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.697498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.697633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.697658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.697829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.698175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.698492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.698782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.698971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.699116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.699256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.699281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 
00:27:04.610 [2024-07-10 15:50:43.699451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.699581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.699607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.699737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.699907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.699932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.700079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.700220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.700246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.700418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.700565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.700591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.700721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.700847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.700875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.701062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.701230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.701264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.701422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.701569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.701601] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 
00:27:04.610 [2024-07-10 15:50:43.701750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.701937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.701965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.702130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.702264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.702294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.702445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.702593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.702625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.702766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.702903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.702940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.703085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.703272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.703301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.703485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.703621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.703652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.703797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.703965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.703992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 
00:27:04.610 [2024-07-10 15:50:43.704161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.704350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.704378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.704529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.704689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.704732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.704885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.705040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.705075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.705233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.705378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.705406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.705553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.705694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.705727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.610 qpair failed and we were unable to recover it. 00:27:04.610 [2024-07-10 15:50:43.705866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.706013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.610 [2024-07-10 15:50:43.706042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.706208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.706338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.706369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 
00:27:04.611 [2024-07-10 15:50:43.706545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.706717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.706744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.706922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.707083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.707109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.707271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.707421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.707459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.707651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.707799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.707828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.707975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.708140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.708165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.708307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.708461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.708488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.708641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.708775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.708800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 
00:27:04.611 [2024-07-10 15:50:43.708934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.709058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.709084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.709246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.709374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.709398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.709548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.709691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.709716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.709882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.710039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.710064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.710210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.710358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.710384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.710527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.710678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.710703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.710884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 
00:27:04.611 [2024-07-10 15:50:43.711214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.711515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.711806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.711988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.712126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.712254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.712279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.712415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.712586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.712611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.712741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.712885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.712909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.713070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.713221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.713246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 
00:27:04.611 [2024-07-10 15:50:43.713388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.713552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.713578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.713714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.713851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.713877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.714065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.714219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.714245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.714374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.714550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.714577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.714708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.714840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.714865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.715013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.715191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.715216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.715346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.715487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.715513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 
00:27:04.611 [2024-07-10 15:50:43.715648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.715786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.715810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.715954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.716121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.716146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.716301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.716466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.716491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.716625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.716773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.716797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.716931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.717222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.717527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 
00:27:04.611 [2024-07-10 15:50:43.717841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.717998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.718152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.718285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.718310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.611 qpair failed and we were unable to recover it. 00:27:04.611 [2024-07-10 15:50:43.718462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.611 [2024-07-10 15:50:43.718611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.718636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.718768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.718897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.718924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.719061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.719188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.719215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.719346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.719500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.719525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.719657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.719794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.719820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 
00:27:04.612 [2024-07-10 15:50:43.719985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.720114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.720138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.720291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.720429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.720454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.720586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.720742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.720770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.720909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.721073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.721099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.721238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.721370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.721394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.721535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.721665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.721691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.721854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.721986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.722011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 
00:27:04.612 [2024-07-10 15:50:43.722133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.722268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.722293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.722436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.722576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.722603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.722775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.722949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.722974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.723217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.723358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.723383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.723537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.723672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.723699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.723894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.724028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.724057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 00:27:04.612 [2024-07-10 15:50:43.724192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.724341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.612 [2024-07-10 15:50:43.724366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.612 qpair failed and we were unable to recover it. 
00:27:04.612 [2024-07-10 15:50:43.724520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.612 [2024-07-10 15:50:43.724654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.612 [2024-07-10 15:50:43.724678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420
00:27:04.612 qpair failed and we were unable to recover it.
[... the same three-message failure (two posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp_qpair_connect_sock "sock connection error" for tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-10 15:50:43.724811 through 15:50:43.754196 ...]
00:27:04.619 [2024-07-10 15:50:43.754361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.619 [2024-07-10 15:50:43.754544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:04.619 [2024-07-10 15:50:43.754573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420
00:27:04.619 qpair failed and we were unable to recover it.
[... the same failure pattern then repeats for tqpair=0x7f69f8000b90, still against addr=10.0.0.2, port=4420, from 2024-07-10 15:50:43.754707 through 15:50:43.774598; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:27:04.620 [2024-07-10 15:50:43.774746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.774914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.774941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.620 qpair failed and we were unable to recover it. 00:27:04.620 [2024-07-10 15:50:43.775071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.775199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.775225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.620 qpair failed and we were unable to recover it. 00:27:04.620 [2024-07-10 15:50:43.775371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.775529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.775555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.620 qpair failed and we were unable to recover it. 00:27:04.620 [2024-07-10 15:50:43.775718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.775847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.775874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.620 qpair failed and we were unable to recover it. 00:27:04.620 [2024-07-10 15:50:43.776048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.776178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.776203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.620 qpair failed and we were unable to recover it. 00:27:04.620 [2024-07-10 15:50:43.776364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.776492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.776522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.620 qpair failed and we were unable to recover it. 00:27:04.620 [2024-07-10 15:50:43.776716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.776873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.620 [2024-07-10 15:50:43.776899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 
00:27:04.621 [2024-07-10 15:50:43.777031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.777215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.777241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.777372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.777507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.777533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.777695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.777849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.777875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.778006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.778165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.778190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.778344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.778474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.778500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.778663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.778836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.778861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.779019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.779189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.779214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 
00:27:04.621 [2024-07-10 15:50:43.779343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.779505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.779531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.779721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.779869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.779894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.780027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.780196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.780220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.780378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.780528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.780554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.780687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.780845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.780870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.781007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.781134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.781159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.781282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.781467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.781493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 
00:27:04.621 [2024-07-10 15:50:43.781631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.781754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.781780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.781904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.782183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.782529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.782832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.782992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.783121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.783292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.783317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.783489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.783660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.783687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 
00:27:04.621 [2024-07-10 15:50:43.783833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.783992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.784018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.784179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.784323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.784350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.784501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.784628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.784653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.784825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.784967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.784992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.785154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.785308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.785333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.785467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.785640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.785666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.785797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.785952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.785978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 
00:27:04.621 [2024-07-10 15:50:43.786115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.786269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.786294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.786454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.786612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.786638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.786767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.786928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.786953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.787118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.787239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.787264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.787458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.787586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.787611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.787774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.787932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.787957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.788116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.788276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.788301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 
00:27:04.621 [2024-07-10 15:50:43.788438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.788562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.788587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.788727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.788899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.788925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.789063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.789223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.789248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.621 [2024-07-10 15:50:43.789389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.789549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.621 [2024-07-10 15:50:43.789575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.621 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.789713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.789856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.789882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.790016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.790151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.790177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.790325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.790462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.790489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 
00:27:04.622 [2024-07-10 15:50:43.790620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.790755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.790780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.790938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.791066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.791092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.791224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.791350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.791375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.791504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.791695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.791721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.791855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.792028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.792054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.792213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.792360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.792385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.792530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.792658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.792684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 
00:27:04.622 [2024-07-10 15:50:43.792844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.793006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.793031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.793187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.793340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.793366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.793503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.793687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.793713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.793846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.794009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.794035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.794181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.794337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.794363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.794523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.794661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.794688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.794852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 
00:27:04.622 [2024-07-10 15:50:43.795167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.795498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.795817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.795969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.796137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.796270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.796297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.796430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.796568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.796594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.796735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.796867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.796894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.797081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.797266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.797292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 
00:27:04.622 [2024-07-10 15:50:43.797422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.797596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.797621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.797745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.797907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.797934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.798068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.798213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.798238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.798371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.798532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.798558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.798720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.798849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.798876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.799028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.799185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.799210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.799377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.799509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.799535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 
00:27:04.622 [2024-07-10 15:50:43.799707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.799838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.799863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.800040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.800168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.800193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.800354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.800483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.800509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.800674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.800801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.800825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.800969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.801101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.801128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.801291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.801416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.622 [2024-07-10 15:50:43.801453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.622 qpair failed and we were unable to recover it. 00:27:04.622 [2024-07-10 15:50:43.801586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.801745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.801772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 
00:27:04.623 [2024-07-10 15:50:43.801933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.802117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.802143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.802306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.802438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.802464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.802659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.802816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.802841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.802990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.803145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.803170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.803333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.803477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.803503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.803663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.803851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.803876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.804021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.804175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.804200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 
00:27:04.623 [2024-07-10 15:50:43.804362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.804549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.804575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.804707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.804841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.804866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.805000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.805122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.805147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.805286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.805417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.805448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.805582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.805748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.805773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.805924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.806056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.806083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.806240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.806368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.806393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 
00:27:04.623 [2024-07-10 15:50:43.806530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.806705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.806731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.806899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.807041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.807066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.807197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.807353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.807378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.807534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.807668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.807694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.807872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.808017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.808043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.808205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.808330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.808355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.808521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.808651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.808676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 
00:27:04.623 [2024-07-10 15:50:43.808837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.808996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.809021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.809208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.809367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.809393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.809543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.809672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.809698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.809885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.810165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.810457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.810795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.810979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 
00:27:04.623 [2024-07-10 15:50:43.811110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.811248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.811275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.811410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.811551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.811577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.811736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.811921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.811946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.812108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.812263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.812289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.623 [2024-07-10 15:50:43.812430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.812568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.623 [2024-07-10 15:50:43.812594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.623 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.812720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.812876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.812901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.813084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.813207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.813233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 
00:27:04.624 [2024-07-10 15:50:43.813405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.813563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.813589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.813730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.813881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.813906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.814035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.814173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.814199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.814329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.814491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.814518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.814679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.814835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.814860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.814981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.815119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.815146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.815281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.815440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.815467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 
00:27:04.624 [2024-07-10 15:50:43.815641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.815779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.815805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.815938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.816084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.816109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.816240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.816434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.816460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.816607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.816743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.816769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.816913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.817069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.817095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.817231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.817398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.817436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.817567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.817698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.817723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 
00:27:04.624 [2024-07-10 15:50:43.817853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.818194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.818511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.818828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.818998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.819145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.819276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.819301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.819465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.819626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.819651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.819787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.819918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.819943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 
00:27:04.624 [2024-07-10 15:50:43.820093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.820224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.820251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.820407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.820542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.820567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.820726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.820869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.820895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.821060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.821198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.821224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.821365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.821510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.821536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.821696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.821857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.821882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.822036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.822195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.822226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 
00:27:04.624 [2024-07-10 15:50:43.822369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.822536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.822562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.822702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.822863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.822889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.823029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.823190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.823216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.823352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.823481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.823508] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.823652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.823776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.823801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.823936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.824065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.824091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 00:27:04.624 [2024-07-10 15:50:43.824224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.824415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.824446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.624 qpair failed and we were unable to recover it. 
00:27:04.624 [2024-07-10 15:50:43.824577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.624 [2024-07-10 15:50:43.824705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.824730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.824860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.825161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.825455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.825747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.825932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.826062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.826247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.826273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.826402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.826555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.826580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 
00:27:04.625 [2024-07-10 15:50:43.826755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.826882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.826909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.827068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.827229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.827254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.827416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.827569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.827595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.827757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.827884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.827911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.828072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.828199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.828225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.828380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.828527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.828559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.828699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.828825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.828851] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 
00:27:04.625 [2024-07-10 15:50:43.829004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.829163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.829189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.829332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.829504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.829532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.829676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.829834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.829859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.830022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.830178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.830203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.830391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.830525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.830551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.830710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.830866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.830891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.831026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.831150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.831176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 
00:27:04.625 [2024-07-10 15:50:43.831323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.831478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.831505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.831660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.831798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.831825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.831986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.832142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.832168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.832355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.832523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.832549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.832713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.832873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.832899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.833034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.833230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.833256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.833393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.833536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.833562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 
00:27:04.625 [2024-07-10 15:50:43.833722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.833846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.833871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.834030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.834186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.834212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.834379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.834549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.834575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.834703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.834870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.834896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.835029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.835162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.835188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.835334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.835533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.835559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.625 [2024-07-10 15:50:43.835701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.835849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.835876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 
00:27:04.625 [2024-07-10 15:50:43.836015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.836202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.625 [2024-07-10 15:50:43.836227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.625 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.836394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.836532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.836558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.836696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.836826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.836853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.837029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.837152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.837177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.837334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.837522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.837548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.837694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.837858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.837885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.838050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.838210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.838235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 
00:27:04.626 [2024-07-10 15:50:43.838393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.838529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.838556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.838704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.838886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.838911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.839046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.839184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.839210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.839397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.839535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.839561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.839698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.839870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.839895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.840030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.840170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.840195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.840386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.840532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.840558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 
00:27:04.626 [2024-07-10 15:50:43.840721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.840905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.840930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.841092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.841234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.841259] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.841392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.841567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.841594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.841737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.841893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.841918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.842098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.842256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.842281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.842421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.842585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.842610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.842754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.842910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.842935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 
00:27:04.626 [2024-07-10 15:50:43.843071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.843195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.843221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.843367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.843540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.843566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.843722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.843870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.843896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.844022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.844177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.844203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.844349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.844490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.844515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.844677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.844807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.844834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.844973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.845139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.845164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 
00:27:04.626 [2024-07-10 15:50:43.845302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.845468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.845494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.845623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.845752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.845777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.845911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.846042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.846070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.626 qpair failed and we were unable to recover it. 00:27:04.626 [2024-07-10 15:50:43.846251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.846380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.626 [2024-07-10 15:50:43.846405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.846542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.846712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.846738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.846879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.847043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.847069] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.847234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.847395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.847420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 
00:27:04.627 [2024-07-10 15:50:43.847599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.847732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.847759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.847886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.848070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.848095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.848283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.848432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.848457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.848636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.848810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.848836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.848978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.849117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.849142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.849268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.849443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.849470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.849628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.849759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.849784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 
00:27:04.627 [2024-07-10 15:50:43.849956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.850117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.850143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.850284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.850445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.850471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.850605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.850761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.850786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.850946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.851069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.851094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.851255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.851377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.851402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.851579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.851705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.851731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.851858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.851996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.852021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 
00:27:04.627 [2024-07-10 15:50:43.852146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.852282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.852308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.852471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.852616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.852642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.852828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.852988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.853014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.853164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.853322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.853347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.853490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.853627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.853652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.853819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.853971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.853996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.854134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.854268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.854293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 
00:27:04.627 [2024-07-10 15:50:43.854434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.854606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.854632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.854792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.854924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.854950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.855087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.855248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.855275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.855400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.855591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.855616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.855754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.855891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.855916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.856051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.856211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.856238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.856406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.856540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.856566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 
00:27:04.627 [2024-07-10 15:50:43.856699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.856827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.856853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.857011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.857146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.857172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.857318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.857479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.857506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.857639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.857776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.857802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.857933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.858086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.858111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.858296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.858442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.858468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.858624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.858751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.858776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 
00:27:04.627 [2024-07-10 15:50:43.858945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.859106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.859133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.859263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.859387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.859412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.859549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.859735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.859760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.859898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.860059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.860085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.627 [2024-07-10 15:50:43.860244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.860365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.627 [2024-07-10 15:50:43.860391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.627 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.860563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.860695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.860720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.860906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.861050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.861075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 
00:27:04.628 [2024-07-10 15:50:43.861246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.861386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.861411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.861558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.861698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.861724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.861866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.862055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.862081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.862222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.862380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.862405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.862577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.862737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.862763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.862919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.863231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 
00:27:04.628 [2024-07-10 15:50:43.863556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.863844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.863990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.864119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.864261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.864286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.864431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.864593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.864619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.864751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.864906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.864932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.865061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.865200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.865226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.865389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.865592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.865619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 
00:27:04.628 [2024-07-10 15:50:43.865748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.865936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.865961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.866092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.866236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.866262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.866422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.866555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.866582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.866732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.866877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.866904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.867065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.867195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.867220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.867348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.867485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.867513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.867675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.867829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.867855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 
00:27:04.628 [2024-07-10 15:50:43.867994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.868154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.868180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.868309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.868465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.868491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.868618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.868742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.868768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.868922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.869084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.869109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.869286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.869433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.869459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.869615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.869777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.869802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.869938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.870071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.870099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 
00:27:04.628 [2024-07-10 15:50:43.870225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.870370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.870395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.870561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.870750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.870775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.870940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.871104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.871131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.871272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.871441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.871473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.871636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.871793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.871818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.871986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.872156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.872181] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.872342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.872477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.872502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 
00:27:04.628 [2024-07-10 15:50:43.872634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.872796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.872820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.872961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.873103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.873129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.873290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.873460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.873487] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.873622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.873784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.873809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.873940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.874082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.874107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.874239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.874377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.874404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 00:27:04.628 [2024-07-10 15:50:43.874571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.874741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.874776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.628 qpair failed and we were unable to recover it. 
00:27:04.628 [2024-07-10 15:50:43.874919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.628 [2024-07-10 15:50:43.875101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.875126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.875256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.875380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.875406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.875548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.875679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.875706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.875867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.876195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.876487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.876785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.876944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 
00:27:04.629 [2024-07-10 15:50:43.877106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.877248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.877273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.877430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.877566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.877596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.877755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.877881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.877911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.878069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.878242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.878267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.878433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.878564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.878591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.878761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.878888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.878913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.879041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.879216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.879242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 
00:27:04.629 [2024-07-10 15:50:43.879398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.879532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.879558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.879722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.879844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.879869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.880026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.880163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.880188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.880318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.880486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.880513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.880675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.880804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.880829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.880988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.881114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.881142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.881280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.881408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.881441] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 
00:27:04.629 [2024-07-10 15:50:43.881572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.881745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.881770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.881928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.882055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.882079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.882231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.882354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.882379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.882541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.882679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.882706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.882866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883076] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.883203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.883479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 
00:27:04.629 [2024-07-10 15:50:43.883801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.883948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.884106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.884227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.884252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.884383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.884533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.884559] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.884707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.884866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.884893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.885031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.885160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.885185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.885343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.885525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.885550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.885675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.885805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.885830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 
00:27:04.629 [2024-07-10 15:50:43.885991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.886119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.886143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.886300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.886445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.886473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.886607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.886735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.886761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.886924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.887050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.887075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.629 qpair failed and we were unable to recover it. 00:27:04.629 [2024-07-10 15:50:43.887231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.887364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.629 [2024-07-10 15:50:43.887388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.887533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.887712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.887737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.887868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 
00:27:04.630 [2024-07-10 15:50:43.888177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.888467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.888751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.888911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.889040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.889203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.889228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.889358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.889492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.889518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.889656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.889829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.889855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.889994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.890129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.890154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 
00:27:04.630 [2024-07-10 15:50:43.890293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.890463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.890489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.890658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.890813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.890838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.891014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.891171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.891196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.891329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.891470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.891495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.891653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.891842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.891867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.891992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.892163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.892188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.892321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.892499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.892524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 
00:27:04.630 [2024-07-10 15:50:43.892659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.892789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.892814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.892987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.893113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.893138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.893272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.893416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.893447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.893608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.893733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.893757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.893921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.894076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.894101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.894235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.894361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.894386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.894515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.894680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.894705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 
00:27:04.630 [2024-07-10 15:50:43.894851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.894985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.895010] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.895181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.895341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.895366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.895544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.895683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.895708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.895835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.896189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.896522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.896821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.896974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 
00:27:04.630 [2024-07-10 15:50:43.897117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.897289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.897313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.897481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.897636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.897662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.897809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.897972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.897996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.898157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.898281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.898305] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.898480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.898638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.898664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.898842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.898995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.899020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.899164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.899320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.899344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 
00:27:04.630 [2024-07-10 15:50:43.899471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.899614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.899639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.899779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.899965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.899990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.900123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.900245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.900269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.900406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.900542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.900567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.630 qpair failed and we were unable to recover it. 00:27:04.630 [2024-07-10 15:50:43.900711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.630 [2024-07-10 15:50:43.900874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.900899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.901036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.901165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.901191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.901337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.901502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.901529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 
00:27:04.631 [2024-07-10 15:50:43.901702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.901843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.901870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.901999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.902124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.902149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.902279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.902409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.902440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.902577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.902713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.902738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.902899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.903057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.903083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.903229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.903360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.903387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.903573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.903695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.903720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 
00:27:04.631 [2024-07-10 15:50:43.903874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.904038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.904063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.904192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.904319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.904343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.904533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.904662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.904687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.904846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.904974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.905001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.905136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.905263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.905288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.905422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.905582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.905607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.905739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.905859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.905884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 
00:27:04.631 [2024-07-10 15:50:43.906030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.906167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.906193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 15:50:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:04.631 [2024-07-10 15:50:43.906343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 15:50:43 -- common/autotest_common.sh@852 -- # return 0 00:27:04.631 [2024-07-10 15:50:43.906476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.906501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 15:50:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:04.631 [2024-07-10 15:50:43.906633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 15:50:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:04.631 [2024-07-10 15:50:43.906793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.631 [2024-07-10 15:50:43.906819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.906984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.907117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.907142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.907276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.907405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.907433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.907602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.907725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.907750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 
00:27:04.631 [2024-07-10 15:50:43.907883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.908041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.908067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.908201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.908361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.908387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.908546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.908685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.908710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.908871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.909027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.909053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.909198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.909360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.909386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.909524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.909699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.909731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.909881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.910043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.910068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 
00:27:04.631 [2024-07-10 15:50:43.910224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.910357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.910383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.910551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.910696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.910721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.910868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.911011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.911037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.911201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.911356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.911381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.911540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.911670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.911694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.911828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.911996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.912021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.912186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.912321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.912345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 
00:27:04.631 [2024-07-10 15:50:43.912508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.912635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.912660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.912817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.912991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.913021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.913150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.913279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.913304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.913447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.913574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.913599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.913731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.913875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.913901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.914098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.914250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.914275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.631 qpair failed and we were unable to recover it. 00:27:04.631 [2024-07-10 15:50:43.914422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.631 [2024-07-10 15:50:43.914615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.914640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 
00:27:04.632 [2024-07-10 15:50:43.914771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.914937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.914962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.915094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.915223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.915250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.915432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.915574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.915599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.915733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.915902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.915928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.916078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.916204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.916234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.916394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.916521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.916548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.916692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.916822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.916847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 
00:27:04.632 [2024-07-10 15:50:43.917024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.917171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.917196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.917354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.917529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.917555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.917684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.917818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.917852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.917998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.918124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.918149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.918301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.918438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.918464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.918612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.918744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.918769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.918925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.919121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.919146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 
00:27:04.632 [2024-07-10 15:50:43.919281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.919453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.919479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.919643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.919783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.919817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.919988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.920113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.920138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.920294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.920438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.920468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.920632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.920762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.920795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.920936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.921069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.921096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.921242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.921381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.921407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 
00:27:04.632 [2024-07-10 15:50:43.921562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.921716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.921752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.921920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.922075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.922099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.922241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.922379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.922404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.922544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.922678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.922703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.922864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.923024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.923049] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.923190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.923331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.923357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.923503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.923664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.923690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 
00:27:04.632 [2024-07-10 15:50:43.923836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 15:50:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.632 [2024-07-10 15:50:43.924003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 15:50:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:04.632 [2024-07-10 15:50:43.924029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 15:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.632 [2024-07-10 15:50:43.924163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.632 [2024-07-10 15:50:43.924322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.924347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.924506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.924641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.924666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.924814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.924949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.924976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.925151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.925303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.925328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.925460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.925592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.925619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 
00:27:04.632 [2024-07-10 15:50:43.925752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.925886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.925910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.926070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.926200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.926224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.926367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.926520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.926545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.926669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.926844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.926868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.927008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.927140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.927165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.927309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.927434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.632 [2024-07-10 15:50:43.927459] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.632 qpair failed and we were unable to recover it. 00:27:04.632 [2024-07-10 15:50:43.927584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.927716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.927740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 
00:27:04.633 [2024-07-10 15:50:43.927868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.928026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.928051] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.928187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.928345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.928370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.928525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.928661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.928686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a00000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.928839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.929017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.929046] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.929177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.929321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.929346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.929572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.929732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.929757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.929921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.930057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.930082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 
00:27:04.633 [2024-07-10 15:50:43.930226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.930356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.930381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.930603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.930764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.930789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.930925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.931056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.931081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.931269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.931437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.931463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.931592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.931723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.931747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.931906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.932072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.932097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.932268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.932409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.932439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 
00:27:04.633 [2024-07-10 15:50:43.932601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.932757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.932782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.932918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.933046] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.933071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.933229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.933361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.933386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.933532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.933660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.933685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.933887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.934034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.934059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.934190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.934333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.934359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.934533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.934672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.934697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 
00:27:04.633 [2024-07-10 15:50:43.934844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.935054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.935079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.935234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.935399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.935431] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.935568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.935743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.935768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.935911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.936069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.936094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.936239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.936371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.936396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.936542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.936697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.936722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.936873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.937030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.937056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 
00:27:04.633 [2024-07-10 15:50:43.937230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.937408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.937438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.937675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.937852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.937877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.938058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.938206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.938231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.938397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.938548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.938573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.938715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.938860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.938885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.939082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.939218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.939244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.939376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.939611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.939637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 
00:27:04.633 [2024-07-10 15:50:43.939816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.939973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.940000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.940134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.940303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.940328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.940494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.940632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.940656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.940802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.940961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.940986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.941132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.941307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.941332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.633 qpair failed and we were unable to recover it. 00:27:04.633 [2024-07-10 15:50:43.941484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.941611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.633 [2024-07-10 15:50:43.941636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.941777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.941933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.941957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 
00:27:04.634 [2024-07-10 15:50:43.942116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.942249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.942274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.942404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.942569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.942595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.942724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.942886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.942910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.943078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.943218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.943242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.943372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.943550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.943575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.943715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.943866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.943891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.944044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.944205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.944232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 
00:27:04.634 [2024-07-10 15:50:43.944400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.944568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.944593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.944722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.944878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.944903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.945064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.945195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.945221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.945378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.945545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.945571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.945709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.945892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.945917] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.946089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.946252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.946277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.946409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.946565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.946590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 
00:27:04.634 [2024-07-10 15:50:43.946756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.946913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.946938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.947104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 Malloc0 00:27:04.634 [2024-07-10 15:50:43.947265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.947290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.947445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 15:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.634 [2024-07-10 15:50:43.947580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.947606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 15:50:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:04.634 [2024-07-10 15:50:43.947750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 15:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.634 [2024-07-10 15:50:43.947910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.947935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.948108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.948238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.948263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.948438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.948569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.948594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.948759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.948881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.948911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 
00:27:04.634 [2024-07-10 15:50:43.949056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.949190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.949215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.949381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.949539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.949565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.949710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.949874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.949899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.950036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.950190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.950215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.950375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.950551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.950578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.950724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.950870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.950895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.950923] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.634 [2024-07-10 15:50:43.951048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.951189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.951213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 
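Interleaved with the connect retries, the xtrace output in the two chunks above shows the target-side setup starting: the host/target_disconnect.sh@21 trace issues rpc_cmd nvmf_create_transport -t tcp -o, and the target acknowledges it with the nvmf_tcp_create "*** TCP Transport Init ***" notice; the bare "Malloc0" line appears to be the bdev name echoed back by an earlier bdev-create RPC whose trace falls outside this window. rpc_cmd in the autotest helpers effectively drives scripts/rpc.py, so a rough standalone equivalent of the traced call would look like the sketch below (the default RPC socket is an assumption, not shown in this log):

    # rough standalone equivalent of the traced RPC
    # (assumes the default /var/tmp/spdk.sock RPC socket):
    ./scripts/rpc.py nvmf_create_transport -t tcp -o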
00:27:04.634 [2024-07-10 15:50:43.951373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.951517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.951543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.951699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.951841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.951865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.952040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.952198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.952227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.952392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.952531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.952556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.952695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.952867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.952891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.953026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.953157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.953184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.953334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.953474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.953502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 
00:27:04.634 [2024-07-10 15:50:43.953638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.953765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.953790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.953956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.954136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.954172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.954303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.954436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.954461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.954607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.954738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.954763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.954925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.955098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.955123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.634 qpair failed and we were unable to recover it. 00:27:04.634 [2024-07-10 15:50:43.955251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.634 [2024-07-10 15:50:43.955419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.955448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.955589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.955721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.955749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 
00:27:04.635 [2024-07-10 15:50:43.955904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.956040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.956065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.956228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.956383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.956407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.956559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.956697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.956722] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.956859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.956987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.957012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.957176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.957306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.957332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.957536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.957672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.957697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.957859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.957990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.958017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 
00:27:04.635 [2024-07-10 15:50:43.958182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.958308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.958333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.958460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.958595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.958620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.958759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.958920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.958945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.959081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.959209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 15:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.635 [2024-07-10 15:50:43.959235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 15:50:43 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.635 [2024-07-10 15:50:43.959365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 15:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.635 [2024-07-10 15:50:43.959520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.959546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.635 [2024-07-10 15:50:43.959678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.959840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.959866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 
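The host/target_disconnect.sh@22 trace above creates the subsystem the initiator will eventually connect to: NQN nqn.2016-06.io.spdk:cnode1, with -a (allow any host) and serial number SPDK00000000000001. A rough rpc.py equivalent, under the same default-socket assumption as before:

    # create the NVMe-oF subsystem, allow any host, set the serial number:
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001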
00:27:04.635 [2024-07-10 15:50:43.960008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.960168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.960192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.960378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.960521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.960546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.960675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.960830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.960856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.961050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.961179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.961205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.961342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.961507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.961533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f6a08000b90 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.961696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.961883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.961911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.962062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.962195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.962221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 
00:27:04.635 [2024-07-10 15:50:43.962367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.962512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.962538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.962676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.962808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.962835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.962967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.963098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.963123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.963258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.963395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.963434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.963573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.963719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.963743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.963878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.964166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 
00:27:04.635 [2024-07-10 15:50:43.964476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.964774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.964935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.965083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.965251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.965279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.965454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.965621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.965646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.965796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.965947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.965972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.966126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.966277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.966302] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.966488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.966637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.966662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 
00:27:04.635 [2024-07-10 15:50:43.966834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.966983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.967009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.967142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 15:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.635 [2024-07-10 15:50:43.967295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.967322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 15:50:43 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 15:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.635 [2024-07-10 15:50:43.967500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.635 [2024-07-10 15:50:43.967642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.967667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.967821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.967984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.968009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.968157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.968311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.968336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.635 qpair failed and we were unable to recover it. 00:27:04.635 [2024-07-10 15:50:43.968508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.635 [2024-07-10 15:50:43.968631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.636 [2024-07-10 15:50:43.968655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.636 qpair failed and we were unable to recover it. 00:27:04.636 [2024-07-10 15:50:43.968818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.636 [2024-07-10 15:50:43.968961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.636 [2024-07-10 15:50:43.968994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d599f0 with addr=10.0.0.2, port=4420 00:27:04.636 qpair failed and we were unable to recover it. 
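The host/target_disconnect.sh@24 trace above attaches the Malloc0 bdev to that subsystem as a namespace. A rough rpc.py equivalent, again assuming the default RPC socket:

    # expose bdev Malloc0 as a namespace of nqn.2016-06.io.spdk:cnode1:
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0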
00:27:04.636 [2024-07-10 15:50:43.969172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.636 [2024-07-10 15:50:43.969320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.636 [2024-07-10 15:50:43.969348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.636 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.969505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.969647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.969674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.969846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.969985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.970011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.970174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.970304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.970329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.970491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.970630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.970655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.970789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.970944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.970970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.971129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.971309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.971343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-07-10 15:50:43.971504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.971659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.971685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.971830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.971996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.972021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.972169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.972327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.972360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.972509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.972650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.972676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.972807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.972949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.972975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.973120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.973313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.973338] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.973491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.973628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.973654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 
00:27:04.893 [2024-07-10 15:50:43.973825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.973984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.974009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.974144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.974274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.974307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.974448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.974605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.974630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.893 qpair failed and we were unable to recover it. 00:27:04.893 [2024-07-10 15:50:43.974774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.893 [2024-07-10 15:50:43.974911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.974937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.975084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.975222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.975248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 15:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.894 [2024-07-10 15:50:43.975386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 15:50:43 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.894 [2024-07-10 15:50:43.975542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 15:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.894 [2024-07-10 15:50:43.975568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.894 [2024-07-10 15:50:43.975703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.975845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.975870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.976061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.976210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.976235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.976398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.976532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.976557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.976692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.976831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.976864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.977029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.977153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.977177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.977319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.977471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.977496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.977649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.977843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.977868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-07-10 15:50:43.978034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.978189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.978215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.978378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.978525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.978552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.978693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.978859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.978885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f69f8000b90 with addr=10.0.0.2, port=4420 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:43.979073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.894 [2024-07-10 15:50:43.979152] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.894 [2024-07-10 15:50:43.982324] posix.c: 670:posix_sock_psk_use_session_client_cb: *ERROR*: PSK is not set 00:27:04.894 [2024-07-10 15:50:43.982397] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7f69f8000b90 (107): Transport endpoint is not connected 00:27:04.894 [2024-07-10 15:50:43.982505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
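The repeated posix_sock_create errors above are the host-side reconnect loop being refused (errno 111, ECONNREFUSED) on 10.0.0.2:4420 while nothing is listening yet; the refusals stop once the target prints the nvmf_tcp_listen NOTICE for that address and port. The shell below is a rough sketch only, not the harness itself: it shows how such a listener is typically added with SPDK's scripts/rpc.py. Only the nvmf_subsystem_add_listener arguments are taken from the rpc_cmd traces in this log; the transport/subsystem creation steps and the ./scripts/rpc.py path are assumed context not visible in this excerpt.

#!/usr/bin/env bash
# Sketch, not the test script: expose an NVMe/TCP listener via SPDK RPC.
RPC=./scripts/rpc.py                                   # assumes an SPDK target app is already running
$RPC nvmf_create_transport -t tcp                      # assumed earlier step: create the TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # assumed earlier step: subsystem, allow any host
# Mirrors the rpc_cmd trace above:
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Mirrors the discovery variant traced just below:
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# Until the listener exists, host connect() attempts fail with errno 111 (ECONNREFUSED),
# which is exactly the repeated posix_sock_create error in the surrounding records.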
00:27:04.894 15:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.894 15:50:43 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:04.894 15:50:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:04.894 15:50:43 -- common/autotest_common.sh@10 -- # set +x 00:27:04.894 15:50:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:04.894 15:50:43 -- host/target_disconnect.sh@58 -- # wait 2234248 00:27:04.894 [2024-07-10 15:50:43.991598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:43.991757] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:43.991789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:43.991804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:43.991818] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:43.991848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.001537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.001680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.001707] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.001738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.001752] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.001783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.011532] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.011674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.011700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.011715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.011728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.011758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
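From this point the TCP connection itself succeeds, but the fabrics-level CONNECT for qpair id 4 is rejected on every retry: the target logs "Unknown controller ID 0x1" in _nvmf_ctrlr_add_io_qpair (the I/O queue CONNECT names a controller the target no longer tracks), and the host sees the Connect command complete with sct 1, sc 130 (0x82), a command-specific Connect failure status, after which nvme_tcp reports the CQ transport error and gives up on the qpair. The snippet below is an illustration only, not what target_disconnect.sh runs: it drives the same NVMe/TCP discover/connect path from a host using nvme-cli, with the address, port and subsystem NQN that appear in these records.

#!/usr/bin/env bash
# Illustration (assumes nvme-cli and the kernel nvme-tcp host driver; not part of this harness).
nvme discover -t tcp -a 10.0.0.2 -s 4420                                 # list what the target advertises
nvme connect  -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1   # admin + I/O queue CONNECTs
# In the failure captured above, the I/O queue CONNECT would be rejected (sct 1, sc 130)
# and no usable controller would appear; clean up any successful attempt with:
nvme disconnect -n nqn.2016-06.io.spdk:cnode1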
00:27:04.894 [2024-07-10 15:50:44.021518] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.021659] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.021685] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.021700] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.021713] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.021747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.031563] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.031708] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.031737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.031751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.031765] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.031794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.041588] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.041721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.041747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.041761] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.041773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.041805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-07-10 15:50:44.051573] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.051712] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.051743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.051757] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.051770] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.051799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.061642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.061778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.061804] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.061818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.061830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.061860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.071660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.071796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.071822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.071836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.071849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.071879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-07-10 15:50:44.081684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.081821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.081847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.081861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.081874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.081904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.091695] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.091835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.091861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.091882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.091896] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.091926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.101746] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.101885] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.101910] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.101925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.101938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.101968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-07-10 15:50:44.111779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.111932] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.111958] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.111972] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.111985] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.112015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.121822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.121963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.121989] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.122003] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.122016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.122046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.131825] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.131964] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.131990] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.132004] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.132016] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.132045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-07-10 15:50:44.141918] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.142076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.142104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.142120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.142133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.142163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.151912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.152047] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.152074] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.152091] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.152105] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.152135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.161916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.162055] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.162081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.162096] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.162109] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.162139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-07-10 15:50:44.172045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.172207] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.172233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.172247] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.172260] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.172289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.182018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.182154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.182184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.182200] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.182213] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.182242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.192044] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.192232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.192257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.192272] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.192285] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.192314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 
00:27:04.894 [2024-07-10 15:50:44.202130] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.202318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.202344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.202359] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.202372] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.202403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.212046] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.894 [2024-07-10 15:50:44.212185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.894 [2024-07-10 15:50:44.212212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.894 [2024-07-10 15:50:44.212226] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.894 [2024-07-10 15:50:44.212239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.894 [2024-07-10 15:50:44.212269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.894 qpair failed and we were unable to recover it. 00:27:04.894 [2024-07-10 15:50:44.222151] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.895 [2024-07-10 15:50:44.222290] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.895 [2024-07-10 15:50:44.222316] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.895 [2024-07-10 15:50:44.222331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.895 [2024-07-10 15:50:44.222344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.895 [2024-07-10 15:50:44.222379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.895 qpair failed and we were unable to recover it. 
00:27:04.895 [2024-07-10 15:50:44.232126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.895 [2024-07-10 15:50:44.232265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.895 [2024-07-10 15:50:44.232291] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.895 [2024-07-10 15:50:44.232306] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.895 [2024-07-10 15:50:44.232319] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.895 [2024-07-10 15:50:44.232348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.895 qpair failed and we were unable to recover it. 00:27:04.895 [2024-07-10 15:50:44.242156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.895 [2024-07-10 15:50:44.242295] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.895 [2024-07-10 15:50:44.242321] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.895 [2024-07-10 15:50:44.242335] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.895 [2024-07-10 15:50:44.242348] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.895 [2024-07-10 15:50:44.242378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.895 qpair failed and we were unable to recover it. 00:27:04.895 [2024-07-10 15:50:44.252155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.895 [2024-07-10 15:50:44.252293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.895 [2024-07-10 15:50:44.252318] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.895 [2024-07-10 15:50:44.252333] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.895 [2024-07-10 15:50:44.252345] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.895 [2024-07-10 15:50:44.252375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.895 qpair failed and we were unable to recover it. 
00:27:04.895 [2024-07-10 15:50:44.262240] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:04.895 [2024-07-10 15:50:44.262373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:04.895 [2024-07-10 15:50:44.262399] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:04.895 [2024-07-10 15:50:44.262414] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:04.895 [2024-07-10 15:50:44.262434] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:04.895 [2024-07-10 15:50:44.262466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:04.895 qpair failed and we were unable to recover it. 00:27:05.152 [2024-07-10 15:50:44.272254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.272394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.272435] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.272453] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.272466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.272498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.152 qpair failed and we were unable to recover it. 00:27:05.152 [2024-07-10 15:50:44.282258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.282395] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.282421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.282444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.282458] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.282488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.152 qpair failed and we were unable to recover it. 
00:27:05.152 [2024-07-10 15:50:44.292307] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.292452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.292477] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.292492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.292505] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.292535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.152 qpair failed and we were unable to recover it. 00:27:05.152 [2024-07-10 15:50:44.302318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.302468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.302493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.302507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.302521] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.302550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.152 qpair failed and we were unable to recover it. 00:27:05.152 [2024-07-10 15:50:44.312375] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.312517] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.312542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.312556] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.312569] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.312604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.152 qpair failed and we were unable to recover it. 
00:27:05.152 [2024-07-10 15:50:44.322384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.322525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.322551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.322566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.322578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.322609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.152 qpair failed and we were unable to recover it. 00:27:05.152 [2024-07-10 15:50:44.332403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.332552] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.332577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.332591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.332604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.332634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.152 qpair failed and we were unable to recover it. 00:27:05.152 [2024-07-10 15:50:44.342471] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.152 [2024-07-10 15:50:44.342631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.152 [2024-07-10 15:50:44.342659] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.152 [2024-07-10 15:50:44.342674] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.152 [2024-07-10 15:50:44.342690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.152 [2024-07-10 15:50:44.342722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 
00:27:05.153 [2024-07-10 15:50:44.352447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.352586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.352612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.352626] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.352639] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.352669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.362490] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.362625] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.362656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.362672] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.362685] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.362715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.372517] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.372657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.372683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.372697] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.372710] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.372740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 
00:27:05.153 [2024-07-10 15:50:44.382511] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.382646] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.382672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.382686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.382698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.382727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.392585] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.392764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.392789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.392804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.392817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.392846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.402576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.402713] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.402739] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.402753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.402771] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.402802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 
00:27:05.153 [2024-07-10 15:50:44.412654] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.412790] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.412818] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.412833] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.412846] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.412875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.422657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.422804] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.422830] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.422845] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.422858] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.422889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.432672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.432809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.432835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.432850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.432862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.432893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 
00:27:05.153 [2024-07-10 15:50:44.442697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.442830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.442856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.442871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.442884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.442913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.452742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.452890] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.452916] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.452930] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.452944] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.452973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.462753] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.462903] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.462929] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.462944] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.462958] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.462987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 
00:27:05.153 [2024-07-10 15:50:44.472801] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.472961] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.472986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.473001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.473013] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.473043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.482816] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.153 [2024-07-10 15:50:44.482949] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.153 [2024-07-10 15:50:44.482975] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.153 [2024-07-10 15:50:44.482990] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.153 [2024-07-10 15:50:44.483003] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.153 [2024-07-10 15:50:44.483031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.153 qpair failed and we were unable to recover it. 00:27:05.153 [2024-07-10 15:50:44.492853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.154 [2024-07-10 15:50:44.493003] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.154 [2024-07-10 15:50:44.493028] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.154 [2024-07-10 15:50:44.493043] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.154 [2024-07-10 15:50:44.493061] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.154 [2024-07-10 15:50:44.493092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.154 qpair failed and we were unable to recover it. 
00:27:05.154 [2024-07-10 15:50:44.502915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.154 [2024-07-10 15:50:44.503050] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.154 [2024-07-10 15:50:44.503076] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.154 [2024-07-10 15:50:44.503090] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.154 [2024-07-10 15:50:44.503103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.154 [2024-07-10 15:50:44.503132] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.154 qpair failed and we were unable to recover it. 00:27:05.154 [2024-07-10 15:50:44.512969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.154 [2024-07-10 15:50:44.513139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.154 [2024-07-10 15:50:44.513165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.154 [2024-07-10 15:50:44.513179] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.154 [2024-07-10 15:50:44.513192] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.154 [2024-07-10 15:50:44.513221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.154 qpair failed and we were unable to recover it. 00:27:05.154 [2024-07-10 15:50:44.522937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.154 [2024-07-10 15:50:44.523075] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.154 [2024-07-10 15:50:44.523100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.154 [2024-07-10 15:50:44.523115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.154 [2024-07-10 15:50:44.523127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.154 [2024-07-10 15:50:44.523157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.154 qpair failed and we were unable to recover it. 
00:27:05.411 [2024-07-10 15:50:44.532981] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.411 [2024-07-10 15:50:44.533120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.411 [2024-07-10 15:50:44.533145] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.411 [2024-07-10 15:50:44.533160] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.411 [2024-07-10 15:50:44.533172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.411 [2024-07-10 15:50:44.533203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.411 qpair failed and we were unable to recover it. 00:27:05.411 [2024-07-10 15:50:44.543051] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.411 [2024-07-10 15:50:44.543213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.411 [2024-07-10 15:50:44.543239] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.411 [2024-07-10 15:50:44.543253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.411 [2024-07-10 15:50:44.543266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.411 [2024-07-10 15:50:44.543295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.411 qpair failed and we were unable to recover it. 00:27:05.411 [2024-07-10 15:50:44.553049] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.411 [2024-07-10 15:50:44.553200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.411 [2024-07-10 15:50:44.553226] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.411 [2024-07-10 15:50:44.553241] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.411 [2024-07-10 15:50:44.553253] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.411 [2024-07-10 15:50:44.553282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.411 qpair failed and we were unable to recover it. 
00:27:05.411 [2024-07-10 15:50:44.563069] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.411 [2024-07-10 15:50:44.563215] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.411 [2024-07-10 15:50:44.563243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.411 [2024-07-10 15:50:44.563257] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.411 [2024-07-10 15:50:44.563273] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.411 [2024-07-10 15:50:44.563304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.411 qpair failed and we were unable to recover it. 00:27:05.411 [2024-07-10 15:50:44.573090] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.411 [2024-07-10 15:50:44.573229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.411 [2024-07-10 15:50:44.573254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.411 [2024-07-10 15:50:44.573269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.411 [2024-07-10 15:50:44.573281] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.411 [2024-07-10 15:50:44.573312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.411 qpair failed and we were unable to recover it. 00:27:05.411 [2024-07-10 15:50:44.583138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.411 [2024-07-10 15:50:44.583299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.411 [2024-07-10 15:50:44.583328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.411 [2024-07-10 15:50:44.583350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.411 [2024-07-10 15:50:44.583364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.411 [2024-07-10 15:50:44.583396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.411 qpair failed and we were unable to recover it. 
00:27:05.411 [2024-07-10 15:50:44.593187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.593353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.593380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.593394] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.593408] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.593447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.603164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.603294] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.603320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.603334] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.603347] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.603376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.613228] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.613382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.613407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.613422] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.613444] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.613475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 
00:27:05.412 [2024-07-10 15:50:44.623239] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.623377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.623402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.623417] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.623438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.623468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.633262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.633451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.633478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.633492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.633506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.633535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.643299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.643443] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.643469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.643483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.643496] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.643526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 
00:27:05.412 [2024-07-10 15:50:44.653326] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.653474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.653499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.653514] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.653526] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.653556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.663357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.663488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.663513] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.663528] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.663541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.663569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.673401] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.673539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.673569] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.673584] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.673597] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.673628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 
00:27:05.412 [2024-07-10 15:50:44.683413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.683548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.683574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.683588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.683600] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.683630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.693502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.693655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.693680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.693694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.693707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.693736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.703496] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.703642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.703669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.703683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.703696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.703726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 
00:27:05.412 [2024-07-10 15:50:44.713494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.713631] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.713656] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.713670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.713683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.713713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.723566] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.723721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.723746] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.723760] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.723773] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.723803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.733569] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.733711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.733737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.733751] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.733764] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.733794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 
00:27:05.412 [2024-07-10 15:50:44.743642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.743776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.743801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.743815] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.743828] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.743857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.753642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.753778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.753803] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.753818] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.753830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.753859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 00:27:05.412 [2024-07-10 15:50:44.763649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.763785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.763815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.763829] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.763842] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.412 [2024-07-10 15:50:44.763871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.412 qpair failed and we were unable to recover it. 
00:27:05.412 [2024-07-10 15:50:44.773685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.412 [2024-07-10 15:50:44.773839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.412 [2024-07-10 15:50:44.773866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.412 [2024-07-10 15:50:44.773880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.412 [2024-07-10 15:50:44.773893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.413 [2024-07-10 15:50:44.773923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.413 qpair failed and we were unable to recover it. 00:27:05.413 [2024-07-10 15:50:44.783752] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.413 [2024-07-10 15:50:44.783908] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.413 [2024-07-10 15:50:44.783934] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.413 [2024-07-10 15:50:44.783948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.413 [2024-07-10 15:50:44.783961] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.413 [2024-07-10 15:50:44.783990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.413 qpair failed and we were unable to recover it. 00:27:05.670 [2024-07-10 15:50:44.793739] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.670 [2024-07-10 15:50:44.793879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.670 [2024-07-10 15:50:44.793905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.670 [2024-07-10 15:50:44.793920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.670 [2024-07-10 15:50:44.793934] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.670 [2024-07-10 15:50:44.793964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.670 qpair failed and we were unable to recover it. 
00:27:05.670 [2024-07-10 15:50:44.803795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.803924] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.803950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.803964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.803977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.804012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.813800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.813944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.813969] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.813984] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.813996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.814025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.823808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.823941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.823966] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.823980] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.823993] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.824021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 
00:27:05.671 [2024-07-10 15:50:44.833912] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.834051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.834077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.834092] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.834104] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.834134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.843897] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.844029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.844054] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.844069] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.844081] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.844110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.853910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.854066] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.854100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.854118] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.854131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.854162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 
00:27:05.671 [2024-07-10 15:50:44.863966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.864097] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.864123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.864138] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.864151] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.864180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.873961] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.874132] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.874158] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.874172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.874185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.874214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.884015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.884145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.884171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.884185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.884198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.884227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 
00:27:05.671 [2024-07-10 15:50:44.894033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.894174] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.894199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.894214] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.894232] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.894277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.904079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.904262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.904287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.904302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.904314] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.904343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.914091] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.914220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.914245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.914260] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.914272] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.914301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 
00:27:05.671 [2024-07-10 15:50:44.924115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.924262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.924288] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.924302] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.924315] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.924344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.934146] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.934291] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.934317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.934331] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.671 [2024-07-10 15:50:44.934344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.671 [2024-07-10 15:50:44.934391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.671 qpair failed and we were unable to recover it. 00:27:05.671 [2024-07-10 15:50:44.944166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.671 [2024-07-10 15:50:44.944303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.671 [2024-07-10 15:50:44.944329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.671 [2024-07-10 15:50:44.944344] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:44.944357] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:44.944397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 
00:27:05.672 [2024-07-10 15:50:44.954248] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:44.954396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:44.954421] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:44.954444] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:44.954457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:44.954497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 00:27:05.672 [2024-07-10 15:50:44.964232] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:44.964378] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:44.964403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:44.964418] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:44.964438] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:44.964482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 00:27:05.672 [2024-07-10 15:50:44.974294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:44.974446] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:44.974472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:44.974487] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:44.974499] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:44.974529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 
00:27:05.672 [2024-07-10 15:50:44.984277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:44.984465] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:44.984492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:44.984507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:44.984525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:44.984557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 00:27:05.672 [2024-07-10 15:50:44.994298] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:44.994452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:44.994478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:44.994492] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:44.994504] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:44.994534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 00:27:05.672 [2024-07-10 15:50:45.004374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:45.004505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:45.004530] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:45.004545] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:45.004557] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:45.004586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 
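Each record above repeats the same sequence: the target's _nvmf_ctrlr_add_io_qpair rejects the I/O queue CONNECT for nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 with "Unknown controller ID 0x1", the host's CONNECT poll then reports sct 1, sc 130, and the qpair is dropped without recovery. For reference only, a minimal sketch of probing the same listener by hand with nvme-cli; this assumes nvme-cli and the nvme-tcp kernel module are available on the host, and the address, service ID, and subsystem NQN are taken from the log lines above rather than from this job's scripts:

    # Illustrative manual probe of the target seen in the log (not part of the test flow)
    nvme discover   -t tcp -a 10.0.0.2 -s 4420                               # list subsystems the target advertises
    nvme connect    -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 # issue the same fabrics CONNECT manually
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1                            # tear the association back down

While the target is in the state shown above, such a manual connect would be expected to fail in the same way as the logged attempts.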
00:27:05.672 [2024-07-10 15:50:45.014385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:45.014525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:45.014550] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:45.014565] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:45.014577] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:45.014619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 00:27:05.672 [2024-07-10 15:50:45.024411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:05.672 [2024-07-10 15:50:45.024553] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:05.672 [2024-07-10 15:50:45.024579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:05.672 [2024-07-10 15:50:45.024594] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:05.672 [2024-07-10 15:50:45.024606] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f69f8000b90 00:27:05.672 [2024-07-10 15:50:45.024648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:05.672 qpair failed and we were unable to recover it. 00:27:05.672 [2024-07-10 15:50:45.024687] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:05.672 A controller has encountered a failure and is being reset. 00:27:05.930 Controller properly reset. 00:27:10.106 Initializing NVMe Controllers 00:27:10.106 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:10.106 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:10.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:10.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:10.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:10.106 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:10.106 Initialization complete. Launching workers. 
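The lines above show the host-side example application re-attaching to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 after the controller reset and associating the connection with lcores 0-3; its worker threads then start on each core, as the next lines show. A comparable run against the same subsystem can be sketched with SPDK's perf example; the binary path and flag values below are illustrative assumptions, not taken from this job's scripts:

    # Hypothetical invocation; adjust the binary path to the local SPDK build
    ./build/examples/perf -q 32 -o 4096 -w randread -t 10 \
        -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

perf prints a similar "Associating ... with lcore N" / "Starting thread on core N" banner while it brings up one I/O qpair per core.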
00:27:10.106 Starting thread on core 1 00:27:10.106 Starting thread on core 2 00:27:10.106 Starting thread on core 3 00:27:10.106 Starting thread on core 0 00:27:10.106 15:50:49 -- host/target_disconnect.sh@59 -- # sync 00:27:10.106 00:27:10.106 real 0m11.283s 00:27:10.106 user 0m30.833s 00:27:10.106 sys 0m6.688s 00:27:10.106 15:50:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:10.106 15:50:49 -- common/autotest_common.sh@10 -- # set +x 00:27:10.106 ************************************ 00:27:10.106 END TEST nvmf_target_disconnect_tc2 00:27:10.106 ************************************ 00:27:10.106 15:50:49 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:27:10.106 15:50:49 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:27:10.106 15:50:49 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:27:10.106 15:50:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:10.106 15:50:49 -- nvmf/common.sh@116 -- # sync 00:27:10.106 15:50:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:10.106 15:50:49 -- nvmf/common.sh@119 -- # set +e 00:27:10.106 15:50:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:10.106 15:50:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:10.106 rmmod nvme_tcp 00:27:10.106 rmmod nvme_fabrics 00:27:10.106 rmmod nvme_keyring 00:27:10.106 15:50:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:10.106 15:50:49 -- nvmf/common.sh@123 -- # set -e 00:27:10.106 15:50:49 -- nvmf/common.sh@124 -- # return 0 00:27:10.106 15:50:49 -- nvmf/common.sh@477 -- # '[' -n 2234751 ']' 00:27:10.106 15:50:49 -- nvmf/common.sh@478 -- # killprocess 2234751 00:27:10.106 15:50:49 -- common/autotest_common.sh@926 -- # '[' -z 2234751 ']' 00:27:10.106 15:50:49 -- common/autotest_common.sh@930 -- # kill -0 2234751 00:27:10.106 15:50:49 -- common/autotest_common.sh@931 -- # uname 00:27:10.106 15:50:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:10.106 15:50:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2234751 00:27:10.106 15:50:49 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:27:10.106 15:50:49 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:27:10.106 15:50:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2234751' 00:27:10.106 killing process with pid 2234751 00:27:10.106 15:50:49 -- common/autotest_common.sh@945 -- # kill 2234751 00:27:10.106 15:50:49 -- common/autotest_common.sh@950 -- # wait 2234751 00:27:10.106 15:50:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:10.106 15:50:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:10.106 15:50:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:10.364 15:50:49 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:10.364 15:50:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:10.364 15:50:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:10.364 15:50:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:10.364 15:50:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:12.266 15:50:51 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:12.266 00:27:12.266 real 0m15.928s 00:27:12.266 user 0m55.781s 00:27:12.266 sys 0m9.029s 00:27:12.266 15:50:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.266 15:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 ************************************ 00:27:12.266 END TEST nvmf_target_disconnect 00:27:12.266 
************************************ 00:27:12.266 15:50:51 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:27:12.266 15:50:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:12.266 15:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 15:50:51 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:27:12.266 00:27:12.266 real 21m0.483s 00:27:12.266 user 60m22.422s 00:27:12.266 sys 5m14.664s 00:27:12.266 15:50:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:12.266 15:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 ************************************ 00:27:12.266 END TEST nvmf_tcp 00:27:12.266 ************************************ 00:27:12.266 15:50:51 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:27:12.266 15:50:51 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:12.266 15:50:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:12.266 15:50:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:12.266 15:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.266 ************************************ 00:27:12.266 START TEST spdkcli_nvmf_tcp 00:27:12.266 ************************************ 00:27:12.266 15:50:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:12.266 * Looking for test storage... 00:27:12.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:12.266 15:50:51 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:12.266 15:50:51 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:12.266 15:50:51 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:12.266 15:50:51 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:12.266 15:50:51 -- nvmf/common.sh@7 -- # uname -s 00:27:12.266 15:50:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:12.266 15:50:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:12.266 15:50:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:12.266 15:50:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:12.266 15:50:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:12.266 15:50:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:12.266 15:50:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:12.266 15:50:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:12.266 15:50:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:12.266 15:50:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:12.266 15:50:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.266 15:50:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:12.267 15:50:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:12.267 15:50:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:12.267 15:50:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:12.267 15:50:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:12.267 15:50:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:27:12.267 15:50:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:12.267 15:50:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:12.267 15:50:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.267 15:50:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.267 15:50:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.267 15:50:51 -- paths/export.sh@5 -- # export PATH 00:27:12.267 15:50:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:12.267 15:50:51 -- nvmf/common.sh@46 -- # : 0 00:27:12.267 15:50:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:12.267 15:50:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:12.525 15:50:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:12.525 15:50:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:12.525 15:50:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:12.525 15:50:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:12.525 15:50:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:12.525 15:50:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:12.525 15:50:51 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:12.525 15:50:51 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:12.525 15:50:51 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:12.525 15:50:51 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:12.525 15:50:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:12.525 15:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.525 15:50:51 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:12.525 15:50:51 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2235891 00:27:12.525 15:50:51 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:12.525 15:50:51 -- spdkcli/common.sh@34 -- # waitforlisten 2235891 00:27:12.525 15:50:51 -- common/autotest_common.sh@819 -- # '[' -z 2235891 ']' 00:27:12.525 15:50:51 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/spdk.sock 00:27:12.525 15:50:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:12.525 15:50:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:12.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:12.525 15:50:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:12.525 15:50:51 -- common/autotest_common.sh@10 -- # set +x 00:27:12.525 [2024-07-10 15:50:51.688431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:12.525 [2024-07-10 15:50:51.688522] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2235891 ] 00:27:12.525 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.525 [2024-07-10 15:50:51.744866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:12.525 [2024-07-10 15:50:51.851241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:12.525 [2024-07-10 15:50:51.851454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:12.525 [2024-07-10 15:50:51.851459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.460 15:50:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:13.460 15:50:52 -- common/autotest_common.sh@852 -- # return 0 00:27:13.460 15:50:52 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:13.460 15:50:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:13.460 15:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:13.460 15:50:52 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:13.460 15:50:52 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:13.460 15:50:52 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:13.460 15:50:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:13.460 15:50:52 -- common/autotest_common.sh@10 -- # set +x 00:27:13.460 15:50:52 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:13.460 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:13.460 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:13.460 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:13.460 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:13.460 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:13.460 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:13.460 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:13.460 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:13.460 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:13.460 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:13.460 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:13.460 ' 00:27:13.718 [2024-07-10 15:50:53.038740] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:16.278 [2024-07-10 15:50:55.186069] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:17.211 [2024-07-10 15:50:56.406505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:19.734 [2024-07-10 15:50:58.673821] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:21.634 [2024-07-10 15:51:00.620104] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:23.007 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:23.007 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:23.007 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:23.007 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:23.007 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:23.007 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:23.007 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:23.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:23.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:23.007 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:23.007 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:23.007 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:23.007 15:51:02 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:23.007 15:51:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:23.007 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:27:23.007 15:51:02 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:23.007 15:51:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:23.007 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:27:23.007 15:51:02 -- spdkcli/nvmf.sh@69 -- # check_match 00:27:23.007 15:51:02 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:23.265 15:51:02 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:23.523 15:51:02 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:23.523 15:51:02 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:23.523 15:51:02 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:23.523 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:27:23.523 15:51:02 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:23.523 15:51:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:23.523 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:27:23.523 15:51:02 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:23.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:23.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:23.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:23.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:23.523 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:23.523 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:23.523 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:23.523 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:23.523 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:23.523 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:23.523 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:23.523 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:23.523 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:23.523 ' 00:27:28.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:28.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:28.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:28.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:28.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:28.790 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:28.790 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:28.790 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:28.790 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:28.790 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:28.790 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:27:28.790 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:28.790 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:28.790 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:28.790 15:51:07 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:28.790 15:51:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:28.790 15:51:07 -- common/autotest_common.sh@10 -- # set +x 00:27:28.790 15:51:07 -- spdkcli/nvmf.sh@90 -- # killprocess 2235891 00:27:28.790 15:51:07 -- common/autotest_common.sh@926 -- # '[' -z 2235891 ']' 00:27:28.790 15:51:07 -- common/autotest_common.sh@930 -- # kill -0 2235891 00:27:28.790 15:51:07 -- common/autotest_common.sh@931 -- # uname 00:27:28.790 15:51:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:28.790 15:51:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2235891 00:27:28.790 15:51:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:28.790 15:51:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:28.790 15:51:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2235891' 00:27:28.790 killing process with pid 2235891 00:27:28.790 15:51:07 -- common/autotest_common.sh@945 -- # kill 2235891 00:27:28.790 [2024-07-10 15:51:07.990915] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:28.790 15:51:07 -- common/autotest_common.sh@950 -- # wait 2235891 00:27:29.098 15:51:08 -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:29.098 15:51:08 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:29.098 15:51:08 -- spdkcli/common.sh@13 -- # '[' -n 2235891 ']' 00:27:29.098 15:51:08 -- spdkcli/common.sh@14 -- # killprocess 2235891 00:27:29.098 15:51:08 -- common/autotest_common.sh@926 -- # '[' -z 2235891 ']' 00:27:29.098 15:51:08 -- common/autotest_common.sh@930 -- # kill -0 2235891 00:27:29.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2235891) - No such process 00:27:29.098 15:51:08 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2235891 is not found' 00:27:29.098 Process with pid 2235891 is not found 00:27:29.098 15:51:08 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:29.098 15:51:08 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:29.098 15:51:08 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:29.098 00:27:29.098 real 0m16.667s 00:27:29.098 user 0m35.217s 00:27:29.098 sys 0m0.824s 00:27:29.098 15:51:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:29.098 15:51:08 -- common/autotest_common.sh@10 -- # set +x 00:27:29.098 ************************************ 00:27:29.098 END TEST spdkcli_nvmf_tcp 00:27:29.098 ************************************ 00:27:29.098 15:51:08 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:29.098 15:51:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:29.098 15:51:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:29.098 15:51:08 -- common/autotest_common.sh@10 -- # set +x 00:27:29.098 ************************************ 00:27:29.098 START TEST 
nvmf_identify_passthru 00:27:29.098 ************************************ 00:27:29.098 15:51:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:29.098 * Looking for test storage... 00:27:29.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:29.098 15:51:08 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:29.098 15:51:08 -- nvmf/common.sh@7 -- # uname -s 00:27:29.098 15:51:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:29.098 15:51:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:29.098 15:51:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:29.098 15:51:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:29.098 15:51:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:29.098 15:51:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:29.098 15:51:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:29.098 15:51:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:29.098 15:51:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:29.098 15:51:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:29.098 15:51:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:29.098 15:51:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:29.098 15:51:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:29.098 15:51:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:29.098 15:51:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:29.098 15:51:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.098 15:51:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.098 15:51:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.099 15:51:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.099 15:51:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- paths/export.sh@5 -- # export PATH 00:27:29.099 
15:51:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- nvmf/common.sh@46 -- # : 0 00:27:29.099 15:51:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:29.099 15:51:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:29.099 15:51:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:29.099 15:51:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:29.099 15:51:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:29.099 15:51:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:29.099 15:51:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:29.099 15:51:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:29.099 15:51:08 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:29.099 15:51:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:29.099 15:51:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:29.099 15:51:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:29.099 15:51:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- paths/export.sh@5 -- # export PATH 00:27:29.099 15:51:08 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:29.099 15:51:08 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:29.099 15:51:08 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:29.099 15:51:08 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:29.099 15:51:08 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:29.099 15:51:08 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:29.099 15:51:08 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:29.099 15:51:08 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.099 15:51:08 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:29.099 15:51:08 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:29.099 15:51:08 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:29.099 15:51:08 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:29.099 15:51:08 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:29.099 15:51:08 -- common/autotest_common.sh@10 -- # set +x 00:27:30.998 15:51:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:30.998 15:51:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:30.998 15:51:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:30.998 15:51:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:30.998 15:51:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:30.998 15:51:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:30.998 15:51:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:30.998 15:51:10 -- nvmf/common.sh@294 -- # net_devs=() 00:27:30.998 15:51:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:30.998 15:51:10 -- nvmf/common.sh@295 -- # e810=() 00:27:30.998 15:51:10 -- nvmf/common.sh@295 -- # local -ga e810 00:27:30.998 15:51:10 -- nvmf/common.sh@296 -- # x722=() 00:27:30.998 15:51:10 -- nvmf/common.sh@296 -- # local -ga x722 00:27:30.998 15:51:10 -- nvmf/common.sh@297 -- # mlx=() 00:27:30.998 15:51:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:30.998 15:51:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:30.998 15:51:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:30.998 15:51:10 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:30.998 15:51:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:30.998 15:51:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:30.998 15:51:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:30.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:30.998 15:51:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:30.998 15:51:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:30.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:30.998 15:51:10 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:30.998 15:51:10 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:30.998 15:51:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.998 15:51:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:30.998 15:51:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.998 15:51:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:30.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:30.998 15:51:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.998 15:51:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:30.998 15:51:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:30.998 15:51:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:30.998 15:51:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:30.998 15:51:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:30.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:30.998 15:51:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:30.998 15:51:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:30.998 15:51:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:30.998 15:51:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:30.998 15:51:10 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:30.998 15:51:10 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:30.998 15:51:10 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:30.998 15:51:10 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:30.998 15:51:10 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:30.998 15:51:10 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:30.998 15:51:10 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:30.998 15:51:10 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:30.998 15:51:10 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:30.998 15:51:10 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:30.998 15:51:10 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:30.998 15:51:10 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:30.998 15:51:10 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:30.998 15:51:10 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:31.256 15:51:10 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:31.256 15:51:10 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:31.256 15:51:10 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:31.256 15:51:10 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:31.256 15:51:10 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:31.257 15:51:10 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:31.257 15:51:10 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:31.257 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:31.257 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:27:31.257 00:27:31.257 --- 10.0.0.2 ping statistics --- 00:27:31.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.257 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:27:31.257 15:51:10 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:31.257 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:31.257 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:27:31.257 00:27:31.257 --- 10.0.0.1 ping statistics --- 00:27:31.257 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:31.257 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:27:31.257 15:51:10 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:31.257 15:51:10 -- nvmf/common.sh@410 -- # return 0 00:27:31.257 15:51:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:31.257 15:51:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:31.257 15:51:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:31.257 15:51:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:31.257 15:51:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:31.257 15:51:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:31.257 15:51:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:31.257 15:51:10 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:31.257 15:51:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:31.257 15:51:10 -- common/autotest_common.sh@10 -- # set +x 00:27:31.257 15:51:10 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:31.257 15:51:10 -- common/autotest_common.sh@1509 -- # bdfs=() 00:27:31.257 15:51:10 -- common/autotest_common.sh@1509 -- # local bdfs 00:27:31.257 15:51:10 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:27:31.257 15:51:10 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:27:31.257 15:51:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:31.257 15:51:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:31.257 15:51:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:27:31.257 15:51:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:31.257 15:51:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:31.257 15:51:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:31.257 15:51:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:31.257 15:51:10 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:27:31.257 15:51:10 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:27:31.257 15:51:10 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:27:31.257 15:51:10 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:31.257 15:51:10 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:31.257 15:51:10 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:31.257 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.444 15:51:14 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:27:35.444 15:51:14 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:27:35.444 15:51:14 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:27:35.444 15:51:14 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:27:35.444 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.689 15:51:18 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:27:39.689 15:51:18 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:27:39.689 15:51:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:39.689 15:51:18 -- common/autotest_common.sh@10 -- # set +x 00:27:39.689 15:51:18 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:27:39.689 15:51:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:39.689 15:51:18 -- common/autotest_common.sh@10 -- # set +x 00:27:39.689 15:51:18 -- target/identify_passthru.sh@31 -- # nvmfpid=2241230 00:27:39.689 15:51:18 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:39.689 15:51:18 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:39.689 15:51:18 -- target/identify_passthru.sh@35 -- # waitforlisten 2241230 00:27:39.689 15:51:18 -- common/autotest_common.sh@819 -- # '[' -z 2241230 ']' 00:27:39.689 15:51:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:39.689 15:51:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:39.689 15:51:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:39.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:39.689 15:51:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:39.689 15:51:18 -- common/autotest_common.sh@10 -- # set +x 00:27:39.689 [2024-07-10 15:51:18.997965] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:27:39.689 [2024-07-10 15:51:18.998056] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:39.689 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.948 [2024-07-10 15:51:19.071267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:39.948 [2024-07-10 15:51:19.191262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:39.948 [2024-07-10 15:51:19.191397] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:39.948 [2024-07-10 15:51:19.191414] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:39.948 [2024-07-10 15:51:19.191432] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:39.948 [2024-07-10 15:51:19.195450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.948 [2024-07-10 15:51:19.195480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:39.948 [2024-07-10 15:51:19.195597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:39.948 [2024-07-10 15:51:19.195601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.948 15:51:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:39.948 15:51:19 -- common/autotest_common.sh@852 -- # return 0 00:27:39.948 15:51:19 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:27:39.948 15:51:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.948 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:27:39.948 INFO: Log level set to 20 00:27:39.948 INFO: Requests: 00:27:39.948 { 00:27:39.948 "jsonrpc": "2.0", 00:27:39.948 "method": "nvmf_set_config", 00:27:39.948 "id": 1, 00:27:39.948 "params": { 00:27:39.948 "admin_cmd_passthru": { 00:27:39.948 "identify_ctrlr": true 00:27:39.948 } 00:27:39.948 } 00:27:39.948 } 00:27:39.948 00:27:39.948 INFO: response: 00:27:39.948 { 00:27:39.948 "jsonrpc": "2.0", 00:27:39.948 "id": 1, 00:27:39.948 "result": true 00:27:39.948 } 00:27:39.948 00:27:39.948 15:51:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:39.948 15:51:19 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:27:39.948 15:51:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:39.948 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:27:39.948 INFO: Setting log level to 20 00:27:39.948 INFO: Setting log level to 20 00:27:39.948 INFO: Log level set to 20 00:27:39.948 INFO: Log level set to 20 00:27:39.948 INFO: Requests: 00:27:39.948 { 00:27:39.948 "jsonrpc": "2.0", 00:27:39.948 "method": "framework_start_init", 00:27:39.948 "id": 1 00:27:39.948 } 00:27:39.948 00:27:39.948 INFO: Requests: 00:27:39.948 { 00:27:39.948 "jsonrpc": "2.0", 00:27:39.948 "method": "framework_start_init", 00:27:39.948 "id": 1 00:27:39.948 } 00:27:39.948 00:27:40.207 [2024-07-10 15:51:19.335618] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:27:40.207 INFO: response: 00:27:40.207 { 00:27:40.207 "jsonrpc": "2.0", 00:27:40.207 "id": 1, 00:27:40.207 "result": true 00:27:40.207 } 00:27:40.207 00:27:40.207 INFO: response: 00:27:40.207 { 00:27:40.207 "jsonrpc": "2.0", 00:27:40.207 "id": 1, 00:27:40.207 "result": true 00:27:40.207 } 00:27:40.207 00:27:40.207 15:51:19 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.207 15:51:19 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:40.207 15:51:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.207 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:27:40.207 INFO: Setting log level to 40 00:27:40.207 INFO: Setting log level to 40 00:27:40.207 INFO: Setting log level to 40 00:27:40.207 [2024-07-10 15:51:19.345576] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.207 15:51:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.207 15:51:19 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:27:40.207 15:51:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:40.207 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:27:40.207 15:51:19 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:27:40.207 15:51:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.207 15:51:19 -- common/autotest_common.sh@10 -- # set +x 00:27:43.483 Nvme0n1 00:27:43.483 15:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.483 15:51:22 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:27:43.483 15:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.483 15:51:22 -- common/autotest_common.sh@10 -- # set +x 00:27:43.483 15:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.483 15:51:22 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:27:43.483 15:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.483 15:51:22 -- common/autotest_common.sh@10 -- # set +x 00:27:43.483 15:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.483 15:51:22 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:43.483 15:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.483 15:51:22 -- common/autotest_common.sh@10 -- # set +x 00:27:43.483 [2024-07-10 15:51:22.233192] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:43.483 15:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.483 15:51:22 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:27:43.483 15:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.483 15:51:22 -- common/autotest_common.sh@10 -- # set +x 00:27:43.483 [2024-07-10 15:51:22.240929] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:27:43.483 [ 00:27:43.483 { 00:27:43.483 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:43.483 "subtype": "Discovery", 00:27:43.483 "listen_addresses": [], 00:27:43.483 "allow_any_host": true, 00:27:43.483 "hosts": [] 00:27:43.483 }, 00:27:43.483 { 00:27:43.483 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:43.483 "subtype": "NVMe", 00:27:43.483 "listen_addresses": [ 00:27:43.483 { 00:27:43.483 "transport": "TCP", 00:27:43.483 "trtype": "TCP", 00:27:43.483 "adrfam": "IPv4", 00:27:43.483 "traddr": "10.0.0.2", 00:27:43.483 "trsvcid": "4420" 00:27:43.483 } 00:27:43.483 ], 00:27:43.483 "allow_any_host": true, 00:27:43.483 "hosts": [], 00:27:43.483 "serial_number": "SPDK00000000000001", 
00:27:43.483 "model_number": "SPDK bdev Controller", 00:27:43.483 "max_namespaces": 1, 00:27:43.483 "min_cntlid": 1, 00:27:43.483 "max_cntlid": 65519, 00:27:43.483 "namespaces": [ 00:27:43.483 { 00:27:43.483 "nsid": 1, 00:27:43.483 "bdev_name": "Nvme0n1", 00:27:43.483 "name": "Nvme0n1", 00:27:43.483 "nguid": "8B85AFAB375E449FAFEB7DDA4C13C8E3", 00:27:43.483 "uuid": "8b85afab-375e-449f-afeb-7dda4c13c8e3" 00:27:43.483 } 00:27:43.483 ] 00:27:43.483 } 00:27:43.483 ] 00:27:43.483 15:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.483 15:51:22 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:43.483 15:51:22 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:27:43.483 15:51:22 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:27:43.483 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.483 15:51:22 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:27:43.483 15:51:22 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:43.483 15:51:22 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:27:43.483 15:51:22 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:27:43.483 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.483 15:51:22 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:27:43.483 15:51:22 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:27:43.483 15:51:22 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:27:43.483 15:51:22 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:43.483 15:51:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:43.483 15:51:22 -- common/autotest_common.sh@10 -- # set +x 00:27:43.483 15:51:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:43.483 15:51:22 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:27:43.483 15:51:22 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:27:43.483 15:51:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:43.483 15:51:22 -- nvmf/common.sh@116 -- # sync 00:27:43.483 15:51:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:43.483 15:51:22 -- nvmf/common.sh@119 -- # set +e 00:27:43.483 15:51:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:43.483 15:51:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:43.483 rmmod nvme_tcp 00:27:43.483 rmmod nvme_fabrics 00:27:43.483 rmmod nvme_keyring 00:27:43.483 15:51:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:43.483 15:51:22 -- nvmf/common.sh@123 -- # set -e 00:27:43.483 15:51:22 -- nvmf/common.sh@124 -- # return 0 00:27:43.483 15:51:22 -- nvmf/common.sh@477 -- # '[' -n 2241230 ']' 00:27:43.483 15:51:22 -- nvmf/common.sh@478 -- # killprocess 2241230 00:27:43.484 15:51:22 -- common/autotest_common.sh@926 -- # '[' -z 2241230 ']' 00:27:43.484 15:51:22 -- common/autotest_common.sh@930 -- # kill -0 2241230 00:27:43.484 15:51:22 -- common/autotest_common.sh@931 -- # uname 00:27:43.484 15:51:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:43.484 15:51:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2241230 00:27:43.484 15:51:22 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:43.484 15:51:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:43.484 15:51:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2241230' 00:27:43.484 killing process with pid 2241230 00:27:43.484 15:51:22 -- common/autotest_common.sh@945 -- # kill 2241230 00:27:43.484 [2024-07-10 15:51:22.584173] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:27:43.484 15:51:22 -- common/autotest_common.sh@950 -- # wait 2241230 00:27:44.853 15:51:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:44.853 15:51:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:44.853 15:51:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:44.853 15:51:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:44.853 15:51:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:44.853 15:51:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.853 15:51:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:44.853 15:51:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.407 15:51:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:47.407 00:27:47.407 real 0m17.958s 00:27:47.407 user 0m26.337s 00:27:47.407 sys 0m2.287s 00:27:47.407 15:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:47.407 15:51:26 -- common/autotest_common.sh@10 -- # set +x 00:27:47.407 ************************************ 00:27:47.407 END TEST nvmf_identify_passthru 00:27:47.407 ************************************ 00:27:47.407 15:51:26 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:47.407 15:51:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:47.407 15:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:47.407 15:51:26 -- common/autotest_common.sh@10 -- # set +x 00:27:47.407 ************************************ 00:27:47.407 START TEST nvmf_dif 00:27:47.407 ************************************ 00:27:47.407 15:51:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:27:47.407 * Looking for test storage... 
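For reference, the nvmf_identify_passthru flow that just finished reduces to a short RPC sequence: create a TCP transport, attach the local NVMe device at 0000:88:00.0 as a passthrough bdev, export it through a single-namespace subsystem, then run spdk_nvme_identify against the TCP listener and compare the serial and model numbers it reports. A minimal manual sketch, assuming scripts/rpc.py from the SPDK tree stands in for the harness's rpc_cmd wrapper and build/bin/nvmf_tgt is already running:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0    # exposes bdev Nvme0n1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# identify over NVMe/TCP and pull the passthrough serial number, as the test does
build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    | grep 'Serial Number:'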
00:27:47.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:47.407 15:51:26 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:47.407 15:51:26 -- nvmf/common.sh@7 -- # uname -s 00:27:47.407 15:51:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:47.407 15:51:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:47.407 15:51:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:47.407 15:51:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:47.407 15:51:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:47.407 15:51:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:47.407 15:51:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:47.407 15:51:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:47.407 15:51:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:47.407 15:51:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:47.407 15:51:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:47.407 15:51:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:47.407 15:51:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:47.407 15:51:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:47.407 15:51:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:47.407 15:51:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:47.407 15:51:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:47.407 15:51:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:47.407 15:51:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:47.407 15:51:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.407 15:51:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.407 15:51:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.407 15:51:26 -- paths/export.sh@5 -- # export PATH 00:27:47.407 15:51:26 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:47.407 15:51:26 -- nvmf/common.sh@46 -- # : 0 00:27:47.407 15:51:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:47.407 15:51:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:47.407 15:51:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:47.407 15:51:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:47.407 15:51:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:47.407 15:51:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:47.407 15:51:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:47.407 15:51:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:47.407 15:51:26 -- target/dif.sh@15 -- # NULL_META=16 00:27:47.407 15:51:26 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:47.407 15:51:26 -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:47.407 15:51:26 -- target/dif.sh@15 -- # NULL_DIF=1 00:27:47.407 15:51:26 -- target/dif.sh@135 -- # nvmftestinit 00:27:47.407 15:51:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:47.407 15:51:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:47.407 15:51:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:47.407 15:51:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:47.407 15:51:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:47.407 15:51:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.407 15:51:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:47.407 15:51:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.407 15:51:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:47.407 15:51:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:47.407 15:51:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:47.407 15:51:26 -- common/autotest_common.sh@10 -- # set +x 00:27:49.309 15:51:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:49.309 15:51:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:49.309 15:51:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:49.309 15:51:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:49.309 15:51:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:49.309 15:51:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:49.309 15:51:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:49.309 15:51:28 -- nvmf/common.sh@294 -- # net_devs=() 00:27:49.309 15:51:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:49.309 15:51:28 -- nvmf/common.sh@295 -- # e810=() 00:27:49.309 15:51:28 -- nvmf/common.sh@295 -- # local -ga e810 00:27:49.309 15:51:28 -- nvmf/common.sh@296 -- # x722=() 00:27:49.309 15:51:28 -- nvmf/common.sh@296 -- # local -ga x722 00:27:49.309 15:51:28 -- nvmf/common.sh@297 -- # mlx=() 00:27:49.309 15:51:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:49.309 15:51:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:27:49.309 15:51:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:49.309 15:51:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:49.309 15:51:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:49.309 15:51:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:49.309 15:51:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:49.309 15:51:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:49.309 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:49.309 15:51:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:49.309 15:51:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:49.309 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:49.309 15:51:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:49.309 15:51:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:49.309 15:51:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.309 15:51:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:49.309 15:51:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.309 15:51:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:49.309 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:49.309 15:51:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:49.309 15:51:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:49.309 15:51:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:49.309 15:51:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:49.309 15:51:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:49.309 15:51:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:49.309 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:49.309 15:51:28 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:49.309 15:51:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:49.309 15:51:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:49.309 15:51:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:49.309 15:51:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:49.309 15:51:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:49.309 15:51:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:49.309 15:51:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:49.309 15:51:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:49.309 15:51:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:49.309 15:51:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:49.309 15:51:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:49.309 15:51:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:49.309 15:51:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:49.309 15:51:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:49.309 15:51:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:49.309 15:51:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:49.309 15:51:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:49.309 15:51:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:49.309 15:51:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:49.309 15:51:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:49.309 15:51:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:49.309 15:51:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:49.309 15:51:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:49.309 15:51:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:49.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:49.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:27:49.309 00:27:49.309 --- 10.0.0.2 ping statistics --- 00:27:49.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.309 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:27:49.309 15:51:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:49.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:49.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:27:49.309 00:27:49.309 --- 10.0.0.1 ping statistics --- 00:27:49.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:49.309 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:49.309 15:51:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:49.309 15:51:28 -- nvmf/common.sh@410 -- # return 0 00:27:49.309 15:51:28 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:27:49.309 15:51:28 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:50.244 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:50.244 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:50.244 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:50.244 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:50.244 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:50.245 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:50.245 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:50.245 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:50.245 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:50.245 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:50.245 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:50.245 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:50.245 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:50.245 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:50.245 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:50.245 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:50.245 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:50.503 15:51:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:50.503 15:51:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:50.503 15:51:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:50.503 15:51:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:50.503 15:51:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:50.503 15:51:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:50.503 15:51:29 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:50.503 15:51:29 -- target/dif.sh@137 -- # nvmfappstart 00:27:50.503 15:51:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:50.503 15:51:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:50.503 15:51:29 -- common/autotest_common.sh@10 -- # set +x 00:27:50.503 15:51:29 -- nvmf/common.sh@469 -- # nvmfpid=2244546 00:27:50.503 15:51:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:50.503 15:51:29 -- nvmf/common.sh@470 -- # waitforlisten 2244546 00:27:50.503 15:51:29 -- common/autotest_common.sh@819 -- # '[' -z 2244546 ']' 00:27:50.503 15:51:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:50.503 15:51:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:50.503 15:51:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:50.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
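The nvmf_tcp_init block above is what builds the two-port test topology: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, port 4420 is opened in the firewall, and a ping in each direction confirms reachability before the target is launched inside the namespace with ip netns exec. Condensed into the underlying commands (interface names cvl_0_0/cvl_0_1 are specific to this machine's E810 ports):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                              # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # target -> initiator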
00:27:50.503 15:51:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:50.503 15:51:29 -- common/autotest_common.sh@10 -- # set +x 00:27:50.503 [2024-07-10 15:51:29.837189] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:27:50.503 [2024-07-10 15:51:29.837264] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:50.503 EAL: No free 2048 kB hugepages reported on node 1 00:27:50.760 [2024-07-10 15:51:29.903398] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:50.760 [2024-07-10 15:51:30.013334] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:50.760 [2024-07-10 15:51:30.013510] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:50.760 [2024-07-10 15:51:30.013529] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:50.760 [2024-07-10 15:51:30.013542] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:50.760 [2024-07-10 15:51:30.013577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.693 15:51:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:51.693 15:51:30 -- common/autotest_common.sh@852 -- # return 0 00:27:51.693 15:51:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:51.693 15:51:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:51.693 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 15:51:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.693 15:51:30 -- target/dif.sh@139 -- # create_transport 00:27:51.693 15:51:30 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:51.693 15:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.693 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 [2024-07-10 15:51:30.814633] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.693 15:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.693 15:51:30 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:51.693 15:51:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:27:51.693 15:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:51.693 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 ************************************ 00:27:51.693 START TEST fio_dif_1_default 00:27:51.693 ************************************ 00:27:51.693 15:51:30 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:27:51.693 15:51:30 -- target/dif.sh@86 -- # create_subsystems 0 00:27:51.693 15:51:30 -- target/dif.sh@28 -- # local sub 00:27:51.693 15:51:30 -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.693 15:51:30 -- target/dif.sh@31 -- # create_subsystem 0 00:27:51.693 15:51:30 -- target/dif.sh@18 -- # local sub_id=0 00:27:51.693 15:51:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:51.693 15:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.693 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 bdev_null0 00:27:51.693 15:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.693 15:51:30 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:51.693 15:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.693 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 15:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.693 15:51:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:51.693 15:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.693 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 15:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.693 15:51:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:51.693 15:51:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:51.693 15:51:30 -- common/autotest_common.sh@10 -- # set +x 00:27:51.693 [2024-07-10 15:51:30.850877] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.693 15:51:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:51.693 15:51:30 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:51.693 15:51:30 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:51.693 15:51:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:51.693 15:51:30 -- nvmf/common.sh@520 -- # config=() 00:27:51.693 15:51:30 -- nvmf/common.sh@520 -- # local subsystem config 00:27:51.693 15:51:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:27:51.693 15:51:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.693 15:51:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:27:51.693 { 00:27:51.693 "params": { 00:27:51.693 "name": "Nvme$subsystem", 00:27:51.693 "trtype": "$TEST_TRANSPORT", 00:27:51.693 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.693 "adrfam": "ipv4", 00:27:51.693 "trsvcid": "$NVMF_PORT", 00:27:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.693 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.693 "hdgst": ${hdgst:-false}, 00:27:51.693 "ddgst": ${ddgst:-false} 00:27:51.693 }, 00:27:51.693 "method": "bdev_nvme_attach_controller" 00:27:51.693 } 00:27:51.693 EOF 00:27:51.693 )") 00:27:51.693 15:51:30 -- target/dif.sh@82 -- # gen_fio_conf 00:27:51.693 15:51:30 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.693 15:51:30 -- target/dif.sh@54 -- # local file 00:27:51.693 15:51:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:51.693 15:51:30 -- target/dif.sh@56 -- # cat 00:27:51.693 15:51:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:51.693 15:51:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:51.693 15:51:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.693 15:51:30 -- common/autotest_common.sh@1320 -- # shift 00:27:51.693 15:51:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:51.693 15:51:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.693 15:51:30 -- nvmf/common.sh@542 -- # cat 00:27:51.693 15:51:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:51.693 15:51:30 -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # 
ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:51.693 15:51:30 -- nvmf/common.sh@544 -- # jq . 00:27:51.693 15:51:30 -- nvmf/common.sh@545 -- # IFS=, 00:27:51.693 15:51:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:27:51.693 "params": { 00:27:51.693 "name": "Nvme0", 00:27:51.693 "trtype": "tcp", 00:27:51.693 "traddr": "10.0.0.2", 00:27:51.693 "adrfam": "ipv4", 00:27:51.693 "trsvcid": "4420", 00:27:51.693 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.693 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.693 "hdgst": false, 00:27:51.693 "ddgst": false 00:27:51.693 }, 00:27:51.693 "method": "bdev_nvme_attach_controller" 00:27:51.693 }' 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:51.693 15:51:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:51.693 15:51:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:51.693 15:51:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:51.693 15:51:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:51.693 15:51:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:51.693 15:51:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.951 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:51.951 fio-3.35 00:27:51.951 Starting 1 thread 00:27:51.951 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.209 [2024-07-10 15:51:31.443864] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
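The fio run here goes through SPDK's fio bdev plugin rather than a kernel block device: fio is started with LD_PRELOAD pointing at build/fio/spdk_bdev, the bdev layer is configured from a JSON document passed via --spdk_json_conf (fed on /dev/fd/62 above), and the job file arrives on a second descriptor (/dev/fd/61). The spdk.sock errors around this point appear to come from the plugin's embedded SPDK instance trying to open the default RPC socket that the target already holds; the run proceeds regardless, as the results below show. A hand-run equivalent with ordinary files; the "subsystems" wrapper around the bdev_nvme_attach_controller entry is assumed from SPDK's JSON-config layout, and the job options (including the Nvme0n1 bdev name) are illustrative:

cat > bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
cat > dif_default.fio <<'EOF'
[filename0]
ioengine=spdk_bdev
thread=1
filename=Nvme0n1
rw=randread
bs=4k
iodepth=4
EOF
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --spdk_json_conf=bdev.json dif_default.fio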
00:27:52.209 [2024-07-10 15:51:31.443945] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:04.403 00:28:04.403 filename0: (groupid=0, jobs=1): err= 0: pid=2244788: Wed Jul 10 15:51:41 2024 00:28:04.403 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10011msec) 00:28:04.403 slat (nsec): min=4399, max=45879, avg=9573.34, stdev=2819.63 00:28:04.403 clat (usec): min=40857, max=47924, avg=41167.10, stdev=567.37 00:28:04.403 lat (usec): min=40865, max=47939, avg=41176.68, stdev=567.43 00:28:04.403 clat percentiles (usec): 00:28:04.403 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:04.403 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:04.403 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:28:04.403 | 99.00th=[42206], 99.50th=[42206], 99.90th=[47973], 99.95th=[47973], 00:28:04.403 | 99.99th=[47973] 00:28:04.403 bw ( KiB/s): min= 384, max= 416, per=99.65%, avg=387.20, stdev= 9.85, samples=20 00:28:04.403 iops : min= 96, max= 104, avg=96.80, stdev= 2.46, samples=20 00:28:04.403 lat (msec) : 50=100.00% 00:28:04.403 cpu : usr=90.26%, sys=9.46%, ctx=39, majf=0, minf=232 00:28:04.403 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:04.403 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.403 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:04.403 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:04.403 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:04.403 00:28:04.403 Run status group 0 (all jobs): 00:28:04.403 READ: bw=388KiB/s (398kB/s), 388KiB/s-388KiB/s (398kB/s-398kB/s), io=3888KiB (3981kB), run=10011-10011msec 00:28:04.403 15:51:41 -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:04.403 15:51:41 -- target/dif.sh@43 -- # local sub 00:28:04.403 15:51:41 -- target/dif.sh@45 -- # for sub in "$@" 00:28:04.403 15:51:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:04.403 15:51:41 -- target/dif.sh@36 -- # local sub_id=0 00:28:04.403 15:51:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 00:28:04.403 real 0m10.993s 00:28:04.403 user 0m9.991s 00:28:04.403 sys 0m1.199s 00:28:04.403 15:51:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 ************************************ 00:28:04.403 END TEST fio_dif_1_default 00:28:04.403 ************************************ 00:28:04.403 15:51:41 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:04.403 15:51:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:04.403 15:51:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 ************************************ 00:28:04.403 START TEST fio_dif_1_multi_subsystems 00:28:04.403 
************************************ 00:28:04.403 15:51:41 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:28:04.403 15:51:41 -- target/dif.sh@92 -- # local files=1 00:28:04.403 15:51:41 -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:04.403 15:51:41 -- target/dif.sh@28 -- # local sub 00:28:04.403 15:51:41 -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.403 15:51:41 -- target/dif.sh@31 -- # create_subsystem 0 00:28:04.403 15:51:41 -- target/dif.sh@18 -- # local sub_id=0 00:28:04.403 15:51:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 bdev_null0 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 [2024-07-10 15:51:41.875832] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@30 -- # for sub in "$@" 00:28:04.403 15:51:41 -- target/dif.sh@31 -- # create_subsystem 1 00:28:04.403 15:51:41 -- target/dif.sh@18 -- # local sub_id=1 00:28:04.403 15:51:41 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 bdev_null1 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.403 15:51:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:04.403 15:51:41 -- common/autotest_common.sh@10 -- # set +x 
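Each target in this multi-subsystem case is built the same way: a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1, exported through its own subsystem on the shared 10.0.0.2:4420 listener (the TCP transport was created earlier with -o --dif-insert-or-strip, so the target inserts and strips the protection information on the wire). Condensed into direct RPC calls, assuming scripts/rpc.py in place of the harness's rpc_cmd wrapper; all values mirror the log:

for i in 0 1; do
    scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done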
00:28:04.403 15:51:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:04.403 15:51:41 -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:04.403 15:51:41 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:04.403 15:51:41 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:04.403 15:51:41 -- nvmf/common.sh@520 -- # config=() 00:28:04.403 15:51:41 -- nvmf/common.sh@520 -- # local subsystem config 00:28:04.403 15:51:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:04.403 15:51:41 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.403 15:51:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:04.403 { 00:28:04.403 "params": { 00:28:04.403 "name": "Nvme$subsystem", 00:28:04.403 "trtype": "$TEST_TRANSPORT", 00:28:04.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.403 "adrfam": "ipv4", 00:28:04.403 "trsvcid": "$NVMF_PORT", 00:28:04.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.403 "hdgst": ${hdgst:-false}, 00:28:04.403 "ddgst": ${ddgst:-false} 00:28:04.403 }, 00:28:04.403 "method": "bdev_nvme_attach_controller" 00:28:04.403 } 00:28:04.403 EOF 00:28:04.403 )") 00:28:04.403 15:51:41 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.403 15:51:41 -- target/dif.sh@82 -- # gen_fio_conf 00:28:04.403 15:51:41 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:04.403 15:51:41 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:04.403 15:51:41 -- target/dif.sh@54 -- # local file 00:28:04.403 15:51:41 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:04.403 15:51:41 -- target/dif.sh@56 -- # cat 00:28:04.403 15:51:41 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.403 15:51:41 -- common/autotest_common.sh@1320 -- # shift 00:28:04.403 15:51:41 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:04.403 15:51:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.403 15:51:41 -- nvmf/common.sh@542 -- # cat 00:28:04.403 15:51:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.403 15:51:41 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:04.403 15:51:41 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:04.403 15:51:41 -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.403 15:51:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:04.403 15:51:41 -- target/dif.sh@73 -- # cat 00:28:04.403 15:51:41 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:04.403 15:51:41 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:04.403 { 00:28:04.403 "params": { 00:28:04.403 "name": "Nvme$subsystem", 00:28:04.403 "trtype": "$TEST_TRANSPORT", 00:28:04.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:04.403 "adrfam": "ipv4", 00:28:04.403 "trsvcid": "$NVMF_PORT", 00:28:04.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:04.404 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:04.404 "hdgst": ${hdgst:-false}, 00:28:04.404 "ddgst": ${ddgst:-false} 00:28:04.404 }, 00:28:04.404 "method": "bdev_nvme_attach_controller" 00:28:04.404 } 00:28:04.404 EOF 00:28:04.404 )") 00:28:04.404 15:51:41 -- nvmf/common.sh@542 -- # cat 00:28:04.404 
15:51:41 -- target/dif.sh@72 -- # (( file++ )) 00:28:04.404 15:51:41 -- target/dif.sh@72 -- # (( file <= files )) 00:28:04.404 15:51:41 -- nvmf/common.sh@544 -- # jq . 00:28:04.404 15:51:41 -- nvmf/common.sh@545 -- # IFS=, 00:28:04.404 15:51:41 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:04.404 "params": { 00:28:04.404 "name": "Nvme0", 00:28:04.404 "trtype": "tcp", 00:28:04.404 "traddr": "10.0.0.2", 00:28:04.404 "adrfam": "ipv4", 00:28:04.404 "trsvcid": "4420", 00:28:04.404 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:04.404 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:04.404 "hdgst": false, 00:28:04.404 "ddgst": false 00:28:04.404 }, 00:28:04.404 "method": "bdev_nvme_attach_controller" 00:28:04.404 },{ 00:28:04.404 "params": { 00:28:04.404 "name": "Nvme1", 00:28:04.404 "trtype": "tcp", 00:28:04.404 "traddr": "10.0.0.2", 00:28:04.404 "adrfam": "ipv4", 00:28:04.404 "trsvcid": "4420", 00:28:04.404 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.404 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:04.404 "hdgst": false, 00:28:04.404 "ddgst": false 00:28:04.404 }, 00:28:04.404 "method": "bdev_nvme_attach_controller" 00:28:04.404 }' 00:28:04.404 15:51:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:04.404 15:51:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:04.404 15:51:41 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:04.404 15:51:41 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:04.404 15:51:41 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:04.404 15:51:41 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:04.404 15:51:41 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:04.404 15:51:41 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:04.404 15:51:41 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:04.404 15:51:41 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:04.404 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:04.404 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:04.404 fio-3.35 00:28:04.404 Starting 2 threads 00:28:04.404 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.404 [2024-07-10 15:51:42.694893] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
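With two subsystems, the generated JSON above attaches two controllers, Nvme0 against cnode0 and Nvme1 against cnode1, and the fio side runs one job per resulting bdev. The harness's actual job file is not printed in the log; an illustrative layout, assuming the controllers surface their namespaces as Nvme0n1 and Nvme1n1 per SPDK's <controller>n<nsid> naming:

cat > dif_multi.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=4k
iodepth=4

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF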
00:28:04.404 [2024-07-10 15:51:42.694971] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:14.363 00:28:14.363 filename0: (groupid=0, jobs=1): err= 0: pid=2246225: Wed Jul 10 15:51:52 2024 00:28:14.363 read: IOPS=188, BW=754KiB/s (772kB/s)(7568KiB/10037msec) 00:28:14.363 slat (nsec): min=6649, max=64832, avg=9913.80, stdev=5376.04 00:28:14.363 clat (usec): min=784, max=45276, avg=21187.16, stdev=20172.48 00:28:14.363 lat (usec): min=791, max=45295, avg=21197.07, stdev=20171.31 00:28:14.363 clat percentiles (usec): 00:28:14.363 | 1.00th=[ 799], 5.00th=[ 816], 10.00th=[ 832], 20.00th=[ 857], 00:28:14.363 | 30.00th=[ 873], 40.00th=[ 889], 50.00th=[41157], 60.00th=[41157], 00:28:14.363 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42730], 00:28:14.363 | 99.00th=[42730], 99.50th=[43254], 99.90th=[45351], 99.95th=[45351], 00:28:14.363 | 99.99th=[45351] 00:28:14.363 bw ( KiB/s): min= 704, max= 768, per=66.26%, avg=755.20, stdev=26.27, samples=20 00:28:14.363 iops : min= 176, max= 192, avg=188.80, stdev= 6.57, samples=20 00:28:14.363 lat (usec) : 1000=44.61% 00:28:14.363 lat (msec) : 2=0.53%, 4=4.76%, 50=50.11% 00:28:14.363 cpu : usr=96.97%, sys=2.74%, ctx=15, majf=0, minf=160 00:28:14.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.363 issued rwts: total=1892,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.363 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:14.363 filename1: (groupid=0, jobs=1): err= 0: pid=2246226: Wed Jul 10 15:51:52 2024 00:28:14.363 read: IOPS=96, BW=386KiB/s (395kB/s)(3872KiB/10040msec) 00:28:14.363 slat (usec): min=7, max=116, avg=12.84, stdev= 7.39 00:28:14.363 clat (usec): min=40867, max=45654, avg=41444.60, stdev=733.22 00:28:14.363 lat (usec): min=40889, max=45699, avg=41457.44, stdev=735.81 00:28:14.363 clat percentiles (usec): 00:28:14.363 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:28:14.363 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:28:14.363 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:28:14.363 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:28:14.363 | 99.99th=[45876] 00:28:14.363 bw ( KiB/s): min= 352, max= 416, per=33.79%, avg=385.60, stdev=12.61, samples=20 00:28:14.363 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:28:14.363 lat (msec) : 50=100.00% 00:28:14.363 cpu : usr=97.14%, sys=2.49%, ctx=14, majf=0, minf=214 00:28:14.363 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:14.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:14.363 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:14.363 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:14.363 00:28:14.363 Run status group 0 (all jobs): 00:28:14.363 READ: bw=1139KiB/s (1167kB/s), 386KiB/s-754KiB/s (395kB/s-772kB/s), io=11.2MiB (11.7MB), run=10037-10040msec 00:28:14.363 15:51:53 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:14.363 15:51:53 -- target/dif.sh@43 -- # local sub 00:28:14.363 15:51:53 -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.363 15:51:53 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:14.363 
15:51:53 -- target/dif.sh@36 -- # local sub_id=0 00:28:14.363 15:51:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:14.363 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.363 15:51:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:14.363 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.363 15:51:53 -- target/dif.sh@45 -- # for sub in "$@" 00:28:14.363 15:51:53 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:14.363 15:51:53 -- target/dif.sh@36 -- # local sub_id=1 00:28:14.363 15:51:53 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:14.363 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.363 15:51:53 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:14.363 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.363 00:28:14.363 real 0m11.356s 00:28:14.363 user 0m20.806s 00:28:14.363 sys 0m0.842s 00:28:14.363 15:51:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 ************************************ 00:28:14.363 END TEST fio_dif_1_multi_subsystems 00:28:14.363 ************************************ 00:28:14.363 15:51:53 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:14.363 15:51:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:14.363 15:51:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 ************************************ 00:28:14.363 START TEST fio_dif_rand_params 00:28:14.363 ************************************ 00:28:14.363 15:51:53 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:28:14.363 15:51:53 -- target/dif.sh@100 -- # local NULL_DIF 00:28:14.363 15:51:53 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:14.363 15:51:53 -- target/dif.sh@103 -- # NULL_DIF=3 00:28:14.363 15:51:53 -- target/dif.sh@103 -- # bs=128k 00:28:14.363 15:51:53 -- target/dif.sh@103 -- # numjobs=3 00:28:14.363 15:51:53 -- target/dif.sh@103 -- # iodepth=3 00:28:14.363 15:51:53 -- target/dif.sh@103 -- # runtime=5 00:28:14.363 15:51:53 -- target/dif.sh@105 -- # create_subsystems 0 00:28:14.363 15:51:53 -- target/dif.sh@28 -- # local sub 00:28:14.363 15:51:53 -- target/dif.sh@30 -- # for sub in "$@" 00:28:14.363 15:51:53 -- target/dif.sh@31 -- # create_subsystem 0 00:28:14.363 15:51:53 -- target/dif.sh@18 -- # local sub_id=0 00:28:14.363 15:51:53 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:14.363 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 bdev_null0 00:28:14.363 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.363 15:51:53 -- target/dif.sh@22 -- # 
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:14.363 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.363 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.363 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.364 15:51:53 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:14.364 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.364 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.364 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.364 15:51:53 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:14.364 15:51:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:14.364 15:51:53 -- common/autotest_common.sh@10 -- # set +x 00:28:14.364 [2024-07-10 15:51:53.260880] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.364 15:51:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:14.364 15:51:53 -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:14.364 15:51:53 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:14.364 15:51:53 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:14.364 15:51:53 -- nvmf/common.sh@520 -- # config=() 00:28:14.364 15:51:53 -- nvmf/common.sh@520 -- # local subsystem config 00:28:14.364 15:51:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:14.364 15:51:53 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.364 15:51:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:14.364 { 00:28:14.364 "params": { 00:28:14.364 "name": "Nvme$subsystem", 00:28:14.364 "trtype": "$TEST_TRANSPORT", 00:28:14.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:14.364 "adrfam": "ipv4", 00:28:14.364 "trsvcid": "$NVMF_PORT", 00:28:14.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:14.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:14.364 "hdgst": ${hdgst:-false}, 00:28:14.364 "ddgst": ${ddgst:-false} 00:28:14.364 }, 00:28:14.364 "method": "bdev_nvme_attach_controller" 00:28:14.364 } 00:28:14.364 EOF 00:28:14.364 )") 00:28:14.364 15:51:53 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.364 15:51:53 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:14.364 15:51:53 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:14.364 15:51:53 -- target/dif.sh@82 -- # gen_fio_conf 00:28:14.364 15:51:53 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:14.364 15:51:53 -- target/dif.sh@54 -- # local file 00:28:14.364 15:51:53 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:14.364 15:51:53 -- common/autotest_common.sh@1320 -- # shift 00:28:14.364 15:51:53 -- target/dif.sh@56 -- # cat 00:28:14.364 15:51:53 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:14.364 15:51:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.364 15:51:53 -- nvmf/common.sh@542 -- # cat 00:28:14.364 15:51:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:14.364 15:51:53 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:28:14.364 15:51:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:14.364 15:51:53 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:14.364 15:51:53 -- target/dif.sh@72 -- # (( file <= files )) 00:28:14.364 15:51:53 -- nvmf/common.sh@544 -- # jq . 00:28:14.364 15:51:53 -- nvmf/common.sh@545 -- # IFS=, 00:28:14.364 15:51:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:14.364 "params": { 00:28:14.364 "name": "Nvme0", 00:28:14.364 "trtype": "tcp", 00:28:14.364 "traddr": "10.0.0.2", 00:28:14.364 "adrfam": "ipv4", 00:28:14.364 "trsvcid": "4420", 00:28:14.364 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:14.364 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:14.364 "hdgst": false, 00:28:14.364 "ddgst": false 00:28:14.364 }, 00:28:14.364 "method": "bdev_nvme_attach_controller" 00:28:14.364 }' 00:28:14.364 15:51:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:14.364 15:51:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:14.364 15:51:53 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:14.364 15:51:53 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:14.364 15:51:53 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:14.364 15:51:53 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:14.364 15:51:53 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:14.364 15:51:53 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:14.364 15:51:53 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:14.364 15:51:53 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:14.364 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:14.364 ... 00:28:14.364 fio-3.35 00:28:14.364 Starting 3 threads 00:28:14.364 EAL: No free 2048 kB hugepages reported on node 1 00:28:14.665 [2024-07-10 15:51:53.885600] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
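The rand_params pass starting here drives the DIF-type-3 null bdev with three 128 KiB random-read jobs at queue depth 3 for five seconds apiece (NULL_DIF=3, bs=128k, numjobs=3, iodepth=3, runtime=5 above). The same shape can be reproduced by passing job options straight on the fio command line, reusing the bdev.json and plugin path from the earlier sketch; the Nvme0n1 filename is again an assumed bdev name and time_based is an illustrative choice, not something shown in the log:

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
    --spdk_json_conf=bdev.json \
    --name=filename0 --ioengine=spdk_bdev --thread=1 \
    --filename=Nvme0n1 --rw=randread --bs=128k \
    --iodepth=3 --numjobs=3 --time_based=1 --runtime=5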
00:28:14.665 [2024-07-10 15:51:53.885670] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:19.922 00:28:19.922 filename0: (groupid=0, jobs=1): err= 0: pid=2247663: Wed Jul 10 15:51:59 2024 00:28:19.922 read: IOPS=161, BW=20.2MiB/s (21.2MB/s)(102MiB/5047msec) 00:28:19.922 slat (nsec): min=3688, max=34659, avg=12963.94, stdev=2758.96 00:28:19.922 clat (usec): min=6941, max=94077, avg=18455.67, stdev=15938.11 00:28:19.922 lat (usec): min=6953, max=94090, avg=18468.63, stdev=15937.97 00:28:19.922 clat percentiles (usec): 00:28:19.922 | 1.00th=[ 7635], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 9634], 00:28:19.922 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11731], 60.00th=[12649], 00:28:19.922 | 70.00th=[13829], 80.00th=[16188], 90.00th=[51119], 95.00th=[52691], 00:28:19.922 | 99.00th=[55837], 99.50th=[56361], 99.90th=[93848], 99.95th=[93848], 00:28:19.922 | 99.99th=[93848] 00:28:19.922 bw ( KiB/s): min=12032, max=29440, per=27.72%, avg=20787.20, stdev=5658.83, samples=10 00:28:19.922 iops : min= 94, max= 230, avg=162.40, stdev=44.21, samples=10 00:28:19.922 lat (msec) : 10=25.03%, 20=57.91%, 50=2.58%, 100=14.48% 00:28:19.922 cpu : usr=92.69%, sys=6.38%, ctx=260, majf=0, minf=86 00:28:19.922 IO depths : 1=2.3%, 2=97.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.922 issued rwts: total=815,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.922 filename0: (groupid=0, jobs=1): err= 0: pid=2247664: Wed Jul 10 15:51:59 2024 00:28:19.922 read: IOPS=211, BW=26.4MiB/s (27.7MB/s)(133MiB/5045msec) 00:28:19.922 slat (nsec): min=4159, max=32564, avg=14292.74, stdev=3682.57 00:28:19.922 clat (usec): min=5359, max=91954, avg=14081.20, stdev=15122.23 00:28:19.922 lat (usec): min=5371, max=91976, avg=14095.49, stdev=15122.32 00:28:19.922 clat percentiles (usec): 00:28:19.922 | 1.00th=[ 5735], 5.00th=[ 6063], 10.00th=[ 6325], 20.00th=[ 7177], 00:28:19.922 | 30.00th=[ 7832], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9765], 00:28:19.922 | 70.00th=[10814], 80.00th=[11731], 90.00th=[48497], 95.00th=[50070], 00:28:19.922 | 99.00th=[88605], 99.50th=[89654], 99.90th=[91751], 99.95th=[91751], 00:28:19.922 | 99.99th=[91751] 00:28:19.922 bw ( KiB/s): min=20736, max=37888, per=36.33%, avg=27243.90, stdev=5607.09, samples=10 00:28:19.922 iops : min= 162, max= 296, avg=212.80, stdev=43.80, samples=10 00:28:19.922 lat (msec) : 10=63.07%, 20=25.30%, 50=6.09%, 100=5.53% 00:28:19.922 cpu : usr=92.51%, sys=6.94%, ctx=8, majf=0, minf=73 00:28:19.922 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.922 issued rwts: total=1067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.922 filename0: (groupid=0, jobs=1): err= 0: pid=2247665: Wed Jul 10 15:51:59 2024 00:28:19.922 read: IOPS=214, BW=26.9MiB/s (28.2MB/s)(134MiB/5004msec) 00:28:19.922 slat (nsec): min=4280, max=35997, avg=13544.09, stdev=4093.61 00:28:19.922 clat (usec): min=5187, max=92702, avg=13941.14, stdev=13908.46 00:28:19.922 lat (usec): min=5199, max=92716, avg=13954.68, stdev=13908.74 00:28:19.922 clat 
percentiles (usec): 00:28:19.922 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6063], 20.00th=[ 6718], 00:28:19.922 | 30.00th=[ 7570], 40.00th=[ 8291], 50.00th=[ 8848], 60.00th=[ 9896], 00:28:19.922 | 70.00th=[11207], 80.00th=[12387], 90.00th=[48497], 95.00th=[50070], 00:28:19.922 | 99.00th=[53216], 99.50th=[54264], 99.90th=[89654], 99.95th=[92799], 00:28:19.922 | 99.99th=[92799] 00:28:19.922 bw ( KiB/s): min=19200, max=38144, per=36.63%, avg=27468.80, stdev=5896.09, samples=10 00:28:19.922 iops : min= 150, max= 298, avg=214.60, stdev=46.06, samples=10 00:28:19.922 lat (msec) : 10=60.84%, 20=27.07%, 50=7.16%, 100=4.93% 00:28:19.922 cpu : usr=87.05%, sys=9.25%, ctx=786, majf=0, minf=125 00:28:19.922 IO depths : 1=1.8%, 2=98.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.922 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.922 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.922 issued rwts: total=1075,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.922 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.922 00:28:19.922 Run status group 0 (all jobs): 00:28:19.922 READ: bw=73.2MiB/s (76.8MB/s), 20.2MiB/s-26.9MiB/s (21.2MB/s-28.2MB/s), io=370MiB (388MB), run=5004-5047msec 00:28:19.922 15:51:59 -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:19.922 15:51:59 -- target/dif.sh@43 -- # local sub 00:28:19.922 15:51:59 -- target/dif.sh@45 -- # for sub in "$@" 00:28:19.922 15:51:59 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:19.922 15:51:59 -- target/dif.sh@36 -- # local sub_id=0 00:28:19.922 15:51:59 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:19.922 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.922 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:19.922 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.922 15:51:59 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:19.922 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.922 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:19.922 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.922 15:51:59 -- target/dif.sh@109 -- # NULL_DIF=2 00:28:19.922 15:51:59 -- target/dif.sh@109 -- # bs=4k 00:28:19.922 15:51:59 -- target/dif.sh@109 -- # numjobs=8 00:28:19.922 15:51:59 -- target/dif.sh@109 -- # iodepth=16 00:28:19.922 15:51:59 -- target/dif.sh@109 -- # runtime= 00:28:19.922 15:51:59 -- target/dif.sh@109 -- # files=2 00:28:19.922 15:51:59 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:19.922 15:51:59 -- target/dif.sh@28 -- # local sub 00:28:19.922 15:51:59 -- target/dif.sh@30 -- # for sub in "$@" 00:28:19.922 15:51:59 -- target/dif.sh@31 -- # create_subsystem 0 00:28:19.922 15:51:59 -- target/dif.sh@18 -- # local sub_id=0 00:28:19.922 15:51:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:19.922 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.922 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:19.922 bdev_null0 00:28:19.922 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.922 15:51:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:19.922 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.922 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:19.922 15:51:59 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.922 15:51:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:19.922 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.922 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:19.922 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.922 15:51:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:19.922 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.922 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:19.922 [2024-07-10 15:51:59.289837] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.922 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:19.922 15:51:59 -- target/dif.sh@30 -- # for sub in "$@" 00:28:19.922 15:51:59 -- target/dif.sh@31 -- # create_subsystem 1 00:28:19.922 15:51:59 -- target/dif.sh@18 -- # local sub_id=1 00:28:19.922 15:51:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:19.922 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:19.922 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 bdev_null1 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:20.203 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.203 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:20.203 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.203 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:20.203 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.203 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@30 -- # for sub in "$@" 00:28:20.203 15:51:59 -- target/dif.sh@31 -- # create_subsystem 2 00:28:20.203 15:51:59 -- target/dif.sh@18 -- # local sub_id=2 00:28:20.203 15:51:59 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:20.203 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.203 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 bdev_null2 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:20.203 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.203 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:20.203 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:28:20.203 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:20.203 15:51:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:20.203 15:51:59 -- common/autotest_common.sh@10 -- # set +x 00:28:20.203 15:51:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:20.203 15:51:59 -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:20.203 15:51:59 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:20.203 15:51:59 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:20.203 15:51:59 -- nvmf/common.sh@520 -- # config=() 00:28:20.203 15:51:59 -- nvmf/common.sh@520 -- # local subsystem config 00:28:20.203 15:51:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:20.203 15:51:59 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.203 15:51:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:20.203 { 00:28:20.203 "params": { 00:28:20.203 "name": "Nvme$subsystem", 00:28:20.203 "trtype": "$TEST_TRANSPORT", 00:28:20.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.203 "adrfam": "ipv4", 00:28:20.203 "trsvcid": "$NVMF_PORT", 00:28:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.203 "hdgst": ${hdgst:-false}, 00:28:20.203 "ddgst": ${ddgst:-false} 00:28:20.203 }, 00:28:20.203 "method": "bdev_nvme_attach_controller" 00:28:20.203 } 00:28:20.203 EOF 00:28:20.203 )") 00:28:20.203 15:51:59 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.203 15:51:59 -- target/dif.sh@82 -- # gen_fio_conf 00:28:20.203 15:51:59 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:20.203 15:51:59 -- target/dif.sh@54 -- # local file 00:28:20.203 15:51:59 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:20.203 15:51:59 -- target/dif.sh@56 -- # cat 00:28:20.203 15:51:59 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:20.203 15:51:59 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.203 15:51:59 -- common/autotest_common.sh@1320 -- # shift 00:28:20.203 15:51:59 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:20.203 15:51:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.203 15:51:59 -- nvmf/common.sh@542 -- # cat 00:28:20.203 15:51:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.203 15:51:59 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:20.203 15:51:59 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:20.203 15:51:59 -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.203 15:51:59 -- target/dif.sh@73 -- # cat 00:28:20.203 15:51:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:20.203 15:51:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:20.203 15:51:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:20.203 { 00:28:20.203 "params": { 00:28:20.203 "name": "Nvme$subsystem", 00:28:20.203 "trtype": "$TEST_TRANSPORT", 00:28:20.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.203 "adrfam": "ipv4", 
00:28:20.203 "trsvcid": "$NVMF_PORT", 00:28:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.203 "hdgst": ${hdgst:-false}, 00:28:20.203 "ddgst": ${ddgst:-false} 00:28:20.203 }, 00:28:20.203 "method": "bdev_nvme_attach_controller" 00:28:20.203 } 00:28:20.203 EOF 00:28:20.203 )") 00:28:20.203 15:51:59 -- nvmf/common.sh@542 -- # cat 00:28:20.203 15:51:59 -- target/dif.sh@72 -- # (( file++ )) 00:28:20.203 15:51:59 -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.203 15:51:59 -- target/dif.sh@73 -- # cat 00:28:20.203 15:51:59 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:20.203 15:51:59 -- target/dif.sh@72 -- # (( file++ )) 00:28:20.203 15:51:59 -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.203 15:51:59 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:20.203 { 00:28:20.203 "params": { 00:28:20.203 "name": "Nvme$subsystem", 00:28:20.203 "trtype": "$TEST_TRANSPORT", 00:28:20.203 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.203 "adrfam": "ipv4", 00:28:20.203 "trsvcid": "$NVMF_PORT", 00:28:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.203 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.203 "hdgst": ${hdgst:-false}, 00:28:20.203 "ddgst": ${ddgst:-false} 00:28:20.203 }, 00:28:20.203 "method": "bdev_nvme_attach_controller" 00:28:20.203 } 00:28:20.203 EOF 00:28:20.203 )") 00:28:20.203 15:51:59 -- nvmf/common.sh@542 -- # cat 00:28:20.203 15:51:59 -- nvmf/common.sh@544 -- # jq . 00:28:20.203 15:51:59 -- nvmf/common.sh@545 -- # IFS=, 00:28:20.203 15:51:59 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:20.203 "params": { 00:28:20.203 "name": "Nvme0", 00:28:20.203 "trtype": "tcp", 00:28:20.203 "traddr": "10.0.0.2", 00:28:20.203 "adrfam": "ipv4", 00:28:20.203 "trsvcid": "4420", 00:28:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:20.203 "hdgst": false, 00:28:20.203 "ddgst": false 00:28:20.203 }, 00:28:20.203 "method": "bdev_nvme_attach_controller" 00:28:20.203 },{ 00:28:20.203 "params": { 00:28:20.203 "name": "Nvme1", 00:28:20.203 "trtype": "tcp", 00:28:20.203 "traddr": "10.0.0.2", 00:28:20.203 "adrfam": "ipv4", 00:28:20.203 "trsvcid": "4420", 00:28:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.203 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.203 "hdgst": false, 00:28:20.203 "ddgst": false 00:28:20.203 }, 00:28:20.203 "method": "bdev_nvme_attach_controller" 00:28:20.203 },{ 00:28:20.203 "params": { 00:28:20.203 "name": "Nvme2", 00:28:20.203 "trtype": "tcp", 00:28:20.203 "traddr": "10.0.0.2", 00:28:20.203 "adrfam": "ipv4", 00:28:20.203 "trsvcid": "4420", 00:28:20.203 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:20.204 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:20.204 "hdgst": false, 00:28:20.204 "ddgst": false 00:28:20.204 }, 00:28:20.204 "method": "bdev_nvme_attach_controller" 00:28:20.204 }' 00:28:20.204 15:51:59 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:20.204 15:51:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:20.204 15:51:59 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.204 15:51:59 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.204 15:51:59 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:20.204 15:51:59 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:20.204 15:51:59 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:28:20.204 15:51:59 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:20.204 15:51:59 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:20.204 15:51:59 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.480 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:20.480 ... 00:28:20.480 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:20.480 ... 00:28:20.480 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:20.480 ... 00:28:20.480 fio-3.35 00:28:20.480 Starting 24 threads 00:28:20.480 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.415 [2024-07-10 15:52:00.439359] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:28:21.415 [2024-07-10 15:52:00.439446] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:31.382 00:28:31.382 filename0: (groupid=0, jobs=1): err= 0: pid=2248553: Wed Jul 10 15:52:10 2024 00:28:31.382 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10022msec) 00:28:31.382 slat (usec): min=6, max=360, avg=38.41, stdev=12.30 00:28:31.382 clat (usec): min=9015, max=43633, avg=32104.48, stdev=3424.82 00:28:31.382 lat (usec): min=9060, max=43654, avg=32142.89, stdev=3424.90 00:28:31.382 clat percentiles (usec): 00:28:31.382 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.382 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:31.382 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.382 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43254], 99.95th=[43779], 00:28:31.382 | 99.99th=[43779] 00:28:31.382 bw ( KiB/s): min= 1536, max= 2052, per=4.21%, avg=1971.40, stdev=157.70, samples=20 00:28:31.382 iops : min= 384, max= 513, avg=492.85, stdev=39.42, samples=20 00:28:31.382 lat (msec) : 10=0.32%, 20=0.32%, 50=99.35% 00:28:31.382 cpu : usr=89.70%, sys=4.94%, ctx=366, majf=0, minf=9 00:28:31.382 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:31.382 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.382 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.382 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.382 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.382 filename0: (groupid=0, jobs=1): err= 0: pid=2248554: Wed Jul 10 15:52:10 2024 00:28:31.382 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10003msec) 00:28:31.382 slat (usec): min=6, max=114, avg=39.51, stdev=13.07 00:28:31.383 clat (usec): min=18484, max=58274, avg=32212.13, stdev=3453.96 00:28:31.383 lat (usec): min=18541, max=58330, avg=32251.64, stdev=3452.41 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:31.383 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[42206], 99.50th=[42730], 99.90th=[57934], 99.95th=[57934], 00:28:31.383 | 99.99th=[58459] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2052, per=4.17%, avg=1954.05, stdev=164.21, 
samples=19 00:28:31.383 iops : min= 384, max= 513, avg=488.47, stdev=41.13, samples=19 00:28:31.383 lat (msec) : 20=0.22%, 50=99.45%, 100=0.33% 00:28:31.383 cpu : usr=95.66%, sys=2.27%, ctx=42, majf=0, minf=9 00:28:31.383 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=2248555: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=488, BW=1954KiB/s (2001kB/s)(19.1MiB/10008msec) 00:28:31.383 slat (usec): min=13, max=165, avg=50.35, stdev=27.89 00:28:31.383 clat (usec): min=8533, max=67930, avg=32348.83, stdev=4281.90 00:28:31.383 lat (usec): min=8574, max=67943, avg=32399.19, stdev=4280.12 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[21103], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:31.383 | 70.00th=[31327], 80.00th=[32375], 90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[46400], 99.50th=[47973], 99.90th=[56361], 99.95th=[56361], 00:28:31.383 | 99.99th=[67634] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2064, per=4.14%, avg=1938.53, stdev=161.79, samples=19 00:28:31.383 iops : min= 384, max= 516, avg=484.63, stdev=40.45, samples=19 00:28:31.383 lat (msec) : 10=0.29%, 20=0.63%, 50=98.59%, 100=0.49% 00:28:31.383 cpu : usr=98.58%, sys=0.97%, ctx=10, majf=0, minf=9 00:28:31.383 IO depths : 1=2.0%, 2=7.7%, 4=23.6%, 8=55.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=94.1%, 8=0.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4890,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=2248556: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=490, BW=1963KiB/s (2011kB/s)(19.2MiB/10021msec) 00:28:31.383 slat (usec): min=7, max=160, avg=35.28, stdev=17.63 00:28:31.383 clat (usec): min=20009, max=43668, avg=32252.96, stdev=3099.06 00:28:31.383 lat (usec): min=20050, max=43700, avg=32288.23, stdev=3099.80 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.383 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43779], 99.95th=[43779], 00:28:31.383 | 99.99th=[43779] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2176, per=4.20%, avg=1964.80, stdev=167.54, samples=20 00:28:31.383 iops : min= 384, max= 544, avg=491.20, stdev=41.88, samples=20 00:28:31.383 lat (msec) : 50=100.00% 00:28:31.383 cpu : usr=98.39%, sys=1.17%, ctx=20, majf=0, minf=9 00:28:31.383 IO depths : 1=5.5%, 2=11.6%, 4=24.6%, 8=51.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:28:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=2248557: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10021msec) 00:28:31.383 slat (usec): min=7, max=149, avg=37.16, stdev=18.03 00:28:31.383 clat (usec): min=20010, max=44328, avg=32228.63, stdev=3108.45 00:28:31.383 lat (usec): min=20063, max=44358, avg=32265.78, stdev=3108.49 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:31.383 | 70.00th=[31327], 80.00th=[32113], 90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43254], 99.95th=[43779], 00:28:31.383 | 99.99th=[44303] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2176, per=4.20%, avg=1964.80, stdev=167.54, samples=20 00:28:31.383 iops : min= 384, max= 544, avg=491.20, stdev=41.88, samples=20 00:28:31.383 lat (msec) : 50=100.00% 00:28:31.383 cpu : usr=98.56%, sys=1.03%, ctx=13, majf=0, minf=9 00:28:31.383 IO depths : 1=5.3%, 2=11.2%, 4=23.7%, 8=52.6%, 16=7.2%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=93.8%, 8=0.4%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=2248558: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=490, BW=1963KiB/s (2011kB/s)(19.2MiB/10021msec) 00:28:31.383 slat (usec): min=7, max=144, avg=40.02, stdev=17.53 00:28:31.383 clat (usec): min=20062, max=45403, avg=32203.60, stdev=3102.65 00:28:31.383 lat (usec): min=20109, max=45433, avg=32243.62, stdev=3102.07 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:31.383 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[41681], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:28:31.383 | 99.99th=[45351] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2176, per=4.20%, avg=1964.80, stdev=167.54, samples=20 00:28:31.383 iops : min= 384, max= 544, avg=491.20, stdev=41.88, samples=20 00:28:31.383 lat (msec) : 50=100.00% 00:28:31.383 cpu : usr=98.67%, sys=0.93%, ctx=20, majf=0, minf=9 00:28:31.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4919,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=2248559: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10029msec) 00:28:31.383 slat (usec): min=6, max=155, avg=27.56, stdev=22.50 00:28:31.383 clat (usec): min=10386, max=43257, avg=31926.42, stdev=4042.91 00:28:31.383 lat (usec): min=10394, max=43273, avg=31953.99, stdev=4041.81 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[11600], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.383 | 70.00th=[31327], 80.00th=[31851], 
90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:28:31.383 | 99.99th=[43254] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2272, per=4.25%, avg=1988.80, stdev=170.68, samples=20 00:28:31.383 iops : min= 384, max= 568, avg=497.20, stdev=42.67, samples=20 00:28:31.383 lat (msec) : 20=1.52%, 50=98.48% 00:28:31.383 cpu : usr=98.44%, sys=1.15%, ctx=15, majf=0, minf=9 00:28:31.383 IO depths : 1=5.5%, 2=11.6%, 4=24.4%, 8=51.6%, 16=7.0%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.383 filename0: (groupid=0, jobs=1): err= 0: pid=2248560: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=490, BW=1960KiB/s (2008kB/s)(19.2MiB/10018msec) 00:28:31.383 slat (usec): min=7, max=153, avg=40.69, stdev=18.68 00:28:31.383 clat (usec): min=13029, max=70688, avg=32289.94, stdev=3887.76 00:28:31.383 lat (usec): min=13073, max=70709, avg=32330.63, stdev=3888.09 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[28967], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:31.383 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[42206], 99.50th=[43254], 99.90th=[70779], 99.95th=[70779], 00:28:31.383 | 99.99th=[70779] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2048, per=4.18%, avg=1956.35, stdev=170.72, samples=20 00:28:31.383 iops : min= 384, max= 512, avg=489.05, stdev=42.67, samples=20 00:28:31.383 lat (msec) : 20=0.18%, 50=99.41%, 100=0.41% 00:28:31.383 cpu : usr=98.44%, sys=1.15%, ctx=14, majf=0, minf=9 00:28:31.383 IO depths : 1=3.6%, 2=9.9%, 4=24.9%, 8=52.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.383 filename1: (groupid=0, jobs=1): err= 0: pid=2248561: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=492, BW=1969KiB/s (2017kB/s)(19.2MiB/10009msec) 00:28:31.383 slat (usec): min=9, max=121, avg=37.48, stdev=10.25 00:28:31.383 clat (usec): min=8083, max=54786, avg=32148.46, stdev=3614.21 00:28:31.383 lat (usec): min=8109, max=54805, avg=32185.94, stdev=3613.25 00:28:31.383 clat percentiles (usec): 00:28:31.383 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30540], 20.00th=[30540], 00:28:31.383 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:31.383 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.383 | 99.00th=[41681], 99.50th=[42730], 99.90th=[54789], 99.95th=[54789], 00:28:31.383 | 99.99th=[54789] 00:28:31.383 bw ( KiB/s): min= 1536, max= 2052, per=4.17%, avg=1953.89, stdev=164.51, samples=19 00:28:31.383 iops : min= 384, max= 513, avg=488.47, stdev=41.13, samples=19 00:28:31.383 lat (msec) : 10=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:28:31.383 cpu : usr=98.41%, sys=1.19%, ctx=15, majf=0, minf=9 00:28:31.383 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:31.383 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:28:31.383 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.383 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.383 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.383 filename1: (groupid=0, jobs=1): err= 0: pid=2248562: Wed Jul 10 15:52:10 2024 00:28:31.383 read: IOPS=444, BW=1776KiB/s (1819kB/s)(17.4MiB/10008msec) 00:28:31.384 slat (usec): min=7, max=116, avg=33.60, stdev=14.42 00:28:31.384 clat (usec): min=10034, max=62286, avg=35750.73, stdev=8221.40 00:28:31.384 lat (usec): min=10058, max=62317, avg=35784.33, stdev=8215.53 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[30016], 5.00th=[30540], 10.00th=[30540], 20.00th=[30802], 00:28:31.384 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31589], 00:28:31.384 | 70.00th=[33424], 80.00th=[44303], 90.00th=[46400], 95.00th=[56886], 00:28:31.384 | 99.00th=[58459], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:28:31.384 | 99.99th=[62129] 00:28:31.384 bw ( KiB/s): min= 1152, max= 2052, per=3.80%, avg=1778.89, stdev=331.13, samples=19 00:28:31.384 iops : min= 288, max= 513, avg=444.68, stdev=82.78, samples=19 00:28:31.384 lat (msec) : 20=0.32%, 50=91.18%, 100=8.51% 00:28:31.384 cpu : usr=95.31%, sys=2.44%, ctx=61, majf=0, minf=11 00:28:31.384 IO depths : 1=2.7%, 2=6.9%, 4=22.9%, 8=57.7%, 16=9.8%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4444,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename1: (groupid=0, jobs=1): err= 0: pid=2248563: Wed Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=492, BW=1968KiB/s (2016kB/s)(19.3MiB/10020msec) 00:28:31.384 slat (usec): min=8, max=148, avg=39.26, stdev=18.06 00:28:31.384 clat (usec): min=5782, max=47848, avg=32173.81, stdev=3301.86 00:28:31.384 lat (usec): min=5813, max=47862, avg=32213.07, stdev=3300.07 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[28967], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.384 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31327], 00:28:31.384 | 70.00th=[31327], 80.00th=[32113], 90.00th=[39060], 95.00th=[39584], 00:28:31.384 | 99.00th=[42206], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:28:31.384 | 99.99th=[47973] 00:28:31.384 bw ( KiB/s): min= 1536, max= 2176, per=4.20%, avg=1966.40, stdev=168.76, samples=20 00:28:31.384 iops : min= 384, max= 544, avg=491.60, stdev=42.19, samples=20 00:28:31.384 lat (msec) : 10=0.12%, 20=0.10%, 50=99.78% 00:28:31.384 cpu : usr=95.76%, sys=2.39%, ctx=154, majf=0, minf=9 00:28:31.384 IO depths : 1=5.4%, 2=11.3%, 4=24.2%, 8=52.0%, 16=7.1%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4931,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename1: (groupid=0, jobs=1): err= 0: pid=2248564: Wed Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.3MiB/10068msec) 00:28:31.384 slat (usec): min=4, max=153, avg=47.06, stdev=23.85 00:28:31.384 clat (usec): min=8973, max=67246, avg=31918.05, stdev=3783.19 00:28:31.384 lat (usec): min=9008, max=67274, 
avg=31965.11, stdev=3782.32 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[28705], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.384 | 30.00th=[30540], 40.00th=[30802], 50.00th=[30802], 60.00th=[31065], 00:28:31.384 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.384 | 99.00th=[40633], 99.50th=[42206], 99.90th=[43254], 99.95th=[43779], 00:28:31.384 | 99.99th=[67634] 00:28:31.384 bw ( KiB/s): min= 1536, max= 2176, per=4.22%, avg=1977.60, stdev=163.37, samples=20 00:28:31.384 iops : min= 384, max= 544, avg=494.40, stdev=40.84, samples=20 00:28:31.384 lat (msec) : 10=0.32%, 20=0.65%, 50=98.99%, 100=0.04% 00:28:31.384 cpu : usr=98.45%, sys=1.11%, ctx=9, majf=0, minf=9 00:28:31.384 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename1: (groupid=0, jobs=1): err= 0: pid=2248565: Wed Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10010msec) 00:28:31.384 slat (nsec): min=4873, max=78833, avg=33967.82, stdev=12619.71 00:28:31.384 clat (usec): min=17666, max=62318, avg=32283.19, stdev=3673.85 00:28:31.384 lat (usec): min=17705, max=62332, avg=32317.16, stdev=3673.43 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[28705], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:31.384 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.384 | 70.00th=[31327], 80.00th=[32113], 90.00th=[39060], 95.00th=[39584], 00:28:31.384 | 99.00th=[42206], 99.50th=[43779], 99.90th=[62129], 99.95th=[62129], 00:28:31.384 | 99.99th=[62129] 00:28:31.384 bw ( KiB/s): min= 1536, max= 2160, per=4.20%, avg=1966.15, stdev=169.50, samples=20 00:28:31.384 iops : min= 384, max= 540, avg=491.50, stdev=42.33, samples=20 00:28:31.384 lat (msec) : 20=0.45%, 50=99.23%, 100=0.33% 00:28:31.384 cpu : usr=94.58%, sys=2.78%, ctx=42, majf=0, minf=9 00:28:31.384 IO depths : 1=4.2%, 2=10.0%, 4=23.5%, 8=53.6%, 16=8.6%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=93.9%, 8=0.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename1: (groupid=0, jobs=1): err= 0: pid=2248566: Wed Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10007msec) 00:28:31.384 slat (usec): min=8, max=141, avg=39.71, stdev=16.35 00:28:31.384 clat (usec): min=16438, max=62361, avg=32349.98, stdev=3778.56 00:28:31.384 lat (usec): min=16460, max=62401, avg=32389.69, stdev=3777.96 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.384 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:31.384 | 70.00th=[31327], 80.00th=[32113], 90.00th=[39060], 95.00th=[39584], 00:28:31.384 | 99.00th=[43254], 99.50th=[52167], 99.90th=[62129], 99.95th=[62129], 00:28:31.384 | 99.99th=[62129] 00:28:31.384 bw ( KiB/s): min= 1536, max= 2048, per=4.16%, avg=1946.95, stdev=162.91, samples=19 00:28:31.384 iops : min= 384, max= 512, avg=486.74, 
stdev=40.73, samples=19 00:28:31.384 lat (msec) : 20=0.37%, 50=98.98%, 100=0.65% 00:28:31.384 cpu : usr=98.27%, sys=1.26%, ctx=50, majf=0, minf=9 00:28:31.384 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.4%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename1: (groupid=0, jobs=1): err= 0: pid=2248567: Wed Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10003msec) 00:28:31.384 slat (usec): min=8, max=677, avg=37.86, stdev=20.13 00:28:31.384 clat (usec): min=28734, max=45743, avg=32243.21, stdev=3070.40 00:28:31.384 lat (usec): min=28757, max=45763, avg=32281.07, stdev=3070.03 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30540], 20.00th=[30540], 00:28:31.384 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.384 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.384 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43779], 00:28:31.384 | 99.99th=[45876] 00:28:31.384 bw ( KiB/s): min= 1536, max= 2176, per=4.19%, avg=1960.42, stdev=170.95, samples=19 00:28:31.384 iops : min= 384, max= 544, avg=490.11, stdev=42.74, samples=19 00:28:31.384 lat (msec) : 50=100.00% 00:28:31.384 cpu : usr=89.36%, sys=4.95%, ctx=385, majf=0, minf=9 00:28:31.384 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename1: (groupid=0, jobs=1): err= 0: pid=2248568: Wed Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10020msec) 00:28:31.384 slat (usec): min=4, max=149, avg=30.17, stdev=15.31 00:28:31.384 clat (usec): min=9676, max=47160, avg=32181.98, stdev=3538.21 00:28:31.384 lat (usec): min=9698, max=47180, avg=32212.16, stdev=3539.05 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[28443], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:31.384 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.384 | 70.00th=[31589], 80.00th=[32375], 90.00th=[39060], 95.00th=[39584], 00:28:31.384 | 99.00th=[41157], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:28:31.384 | 99.99th=[46924] 00:28:31.384 bw ( KiB/s): min= 1536, max= 2104, per=4.21%, avg=1971.40, stdev=157.46, samples=20 00:28:31.384 iops : min= 384, max= 526, avg=492.85, stdev=39.37, samples=20 00:28:31.384 lat (msec) : 10=0.26%, 20=0.53%, 50=99.21% 00:28:31.384 cpu : usr=95.34%, sys=2.31%, ctx=186, majf=0, minf=9 00:28:31.384 IO depths : 1=4.7%, 2=10.3%, 4=22.3%, 8=54.9%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename2: (groupid=0, jobs=1): err= 0: pid=2248569: Wed 
Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10007msec) 00:28:31.384 slat (usec): min=7, max=123, avg=33.51, stdev=18.87 00:28:31.384 clat (usec): min=8801, max=53675, avg=32266.31, stdev=3722.90 00:28:31.384 lat (usec): min=8832, max=53731, avg=32299.82, stdev=3722.74 00:28:31.384 clat percentiles (usec): 00:28:31.384 | 1.00th=[28181], 5.00th=[30016], 10.00th=[30278], 20.00th=[30802], 00:28:31.384 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.384 | 70.00th=[31589], 80.00th=[32113], 90.00th=[39060], 95.00th=[39584], 00:28:31.384 | 99.00th=[42730], 99.50th=[43779], 99.90th=[53740], 99.95th=[53740], 00:28:31.384 | 99.99th=[53740] 00:28:31.384 bw ( KiB/s): min= 1536, max= 2064, per=4.17%, avg=1953.68, stdev=166.01, samples=19 00:28:31.384 iops : min= 384, max= 516, avg=488.42, stdev=41.50, samples=19 00:28:31.384 lat (msec) : 10=0.28%, 20=0.49%, 50=98.90%, 100=0.32% 00:28:31.384 cpu : usr=98.63%, sys=0.98%, ctx=15, majf=0, minf=9 00:28:31.384 IO depths : 1=0.2%, 2=4.8%, 4=18.5%, 8=62.6%, 16=13.9%, 32=0.0%, >=64=0.0% 00:28:31.384 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 complete : 0=0.0%, 4=93.1%, 8=2.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.384 issued rwts: total=4926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.384 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.384 filename2: (groupid=0, jobs=1): err= 0: pid=2248570: Wed Jul 10 15:52:10 2024 00:28:31.384 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10021msec) 00:28:31.384 slat (usec): min=8, max=148, avg=37.61, stdev=13.86 00:28:31.384 clat (usec): min=20051, max=46990, avg=32232.11, stdev=3120.34 00:28:31.384 lat (usec): min=20083, max=47031, avg=32269.72, stdev=3119.37 00:28:31.384 clat percentiles (usec): 00:28:31.385 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30540], 20.00th=[30540], 00:28:31.385 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.385 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.385 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43779], 99.95th=[45351], 00:28:31.385 | 99.99th=[46924] 00:28:31.385 bw ( KiB/s): min= 1536, max= 2176, per=4.20%, avg=1964.80, stdev=167.54, samples=20 00:28:31.385 iops : min= 384, max= 544, avg=491.20, stdev=41.88, samples=20 00:28:31.385 lat (msec) : 50=100.00% 00:28:31.385 cpu : usr=94.15%, sys=2.88%, ctx=156, majf=0, minf=9 00:28:31.385 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:28:31.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.385 filename2: (groupid=0, jobs=1): err= 0: pid=2248571: Wed Jul 10 15:52:10 2024 00:28:31.385 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10008msec) 00:28:31.385 slat (usec): min=9, max=142, avg=41.55, stdev=17.13 00:28:31.385 clat (usec): min=9291, max=53032, avg=32095.39, stdev=3569.30 00:28:31.385 lat (usec): min=9311, max=53072, avg=32136.94, stdev=3569.08 00:28:31.385 clat percentiles (usec): 00:28:31.385 | 1.00th=[29492], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.385 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:31.385 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.385 | 
99.00th=[41157], 99.50th=[43254], 99.90th=[52691], 99.95th=[53216], 00:28:31.385 | 99.99th=[53216] 00:28:31.385 bw ( KiB/s): min= 1536, max= 2048, per=4.17%, avg=1953.68, stdev=164.38, samples=19 00:28:31.385 iops : min= 384, max= 512, avg=488.42, stdev=41.09, samples=19 00:28:31.385 lat (msec) : 10=0.32%, 20=0.32%, 50=99.03%, 100=0.32% 00:28:31.385 cpu : usr=97.14%, sys=1.89%, ctx=131, majf=0, minf=9 00:28:31.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:31.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.385 filename2: (groupid=0, jobs=1): err= 0: pid=2248572: Wed Jul 10 15:52:10 2024 00:28:31.385 read: IOPS=500, BW=2000KiB/s (2048kB/s)(19.6MiB/10014msec) 00:28:31.385 slat (nsec): min=7748, max=83687, avg=13027.14, stdev=9536.01 00:28:31.385 clat (usec): min=3788, max=47573, avg=31887.15, stdev=6191.73 00:28:31.385 lat (usec): min=3796, max=47582, avg=31900.17, stdev=6190.58 00:28:31.385 clat percentiles (usec): 00:28:31.385 | 1.00th=[ 9110], 5.00th=[18220], 10.00th=[30016], 20.00th=[30802], 00:28:31.385 | 30.00th=[31065], 40.00th=[31065], 50.00th=[31327], 60.00th=[31327], 00:28:31.385 | 70.00th=[31589], 80.00th=[32900], 90.00th=[39584], 95.00th=[43254], 00:28:31.385 | 99.00th=[44827], 99.50th=[45351], 99.90th=[47449], 99.95th=[47449], 00:28:31.385 | 99.99th=[47449] 00:28:31.385 bw ( KiB/s): min= 1536, max= 2232, per=4.26%, avg=1996.80, stdev=188.45, samples=20 00:28:31.385 iops : min= 384, max= 558, avg=499.20, stdev=47.11, samples=20 00:28:31.385 lat (msec) : 4=0.18%, 10=1.10%, 20=5.91%, 50=92.81% 00:28:31.385 cpu : usr=98.50%, sys=1.12%, ctx=14, majf=0, minf=9 00:28:31.385 IO depths : 1=3.5%, 2=8.7%, 4=21.1%, 8=57.6%, 16=9.0%, 32=0.0%, >=64=0.0% 00:28:31.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 complete : 0=0.0%, 4=93.4%, 8=0.9%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.385 filename2: (groupid=0, jobs=1): err= 0: pid=2248573: Wed Jul 10 15:52:10 2024 00:28:31.385 read: IOPS=493, BW=1976KiB/s (2023kB/s)(19.3MiB/10021msec) 00:28:31.385 slat (usec): min=5, max=157, avg=33.22, stdev=16.41 00:28:31.385 clat (usec): min=8511, max=61504, avg=32100.09, stdev=3584.88 00:28:31.385 lat (usec): min=8533, max=61569, avg=32133.31, stdev=3586.74 00:28:31.385 clat percentiles (usec): 00:28:31.385 | 1.00th=[27657], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.385 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.385 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.385 | 99.00th=[41157], 99.50th=[42730], 99.90th=[46400], 99.95th=[46924], 00:28:31.385 | 99.99th=[61604] 00:28:31.385 bw ( KiB/s): min= 1536, max= 2160, per=4.22%, avg=1973.60, stdev=163.27, samples=20 00:28:31.385 iops : min= 384, max= 540, avg=493.40, stdev=40.82, samples=20 00:28:31.385 lat (msec) : 10=0.32%, 20=0.65%, 50=98.99%, 100=0.04% 00:28:31.385 cpu : usr=93.10%, sys=3.29%, ctx=64, majf=0, minf=9 00:28:31.385 IO depths : 1=4.7%, 2=10.8%, 4=24.6%, 8=52.1%, 16=7.9%, 32=0.0%, >=64=0.0% 00:28:31.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:28:31.385 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 issued rwts: total=4950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.385 filename2: (groupid=0, jobs=1): err= 0: pid=2248574: Wed Jul 10 15:52:10 2024 00:28:31.385 read: IOPS=491, BW=1964KiB/s (2011kB/s)(19.2MiB/10003msec) 00:28:31.385 slat (usec): min=6, max=164, avg=37.56, stdev=22.47 00:28:31.385 clat (usec): min=17673, max=57530, avg=32238.20, stdev=3475.13 00:28:31.385 lat (usec): min=17700, max=57546, avg=32275.76, stdev=3470.89 00:28:31.385 clat percentiles (usec): 00:28:31.385 | 1.00th=[29754], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.385 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:31.385 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.385 | 99.00th=[41157], 99.50th=[43779], 99.90th=[57410], 99.95th=[57410], 00:28:31.385 | 99.99th=[57410] 00:28:31.385 bw ( KiB/s): min= 1536, max= 2052, per=4.17%, avg=1954.21, stdev=164.05, samples=19 00:28:31.385 iops : min= 384, max= 513, avg=488.47, stdev=41.13, samples=19 00:28:31.385 lat (msec) : 20=0.33%, 50=99.35%, 100=0.33% 00:28:31.385 cpu : usr=98.56%, sys=1.03%, ctx=13, majf=0, minf=9 00:28:31.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:28:31.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 issued rwts: total=4912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.385 filename2: (groupid=0, jobs=1): err= 0: pid=2248575: Wed Jul 10 15:52:10 2024 00:28:31.385 read: IOPS=492, BW=1969KiB/s (2016kB/s)(19.2MiB/10008msec) 00:28:31.385 slat (usec): min=7, max=177, avg=42.33, stdev=22.84 00:28:31.385 clat (usec): min=8886, max=56804, avg=32152.10, stdev=3972.02 00:28:31.385 lat (usec): min=8920, max=56841, avg=32194.44, stdev=3970.52 00:28:31.385 clat percentiles (usec): 00:28:31.385 | 1.00th=[21890], 5.00th=[30016], 10.00th=[30278], 20.00th=[30540], 00:28:31.385 | 30.00th=[30802], 40.00th=[30802], 50.00th=[31065], 60.00th=[31065], 00:28:31.385 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.385 | 99.00th=[43254], 99.50th=[54264], 99.90th=[56886], 99.95th=[56886], 00:28:31.385 | 99.99th=[56886] 00:28:31.385 bw ( KiB/s): min= 1536, max= 2064, per=4.17%, avg=1954.05, stdev=165.98, samples=19 00:28:31.385 iops : min= 384, max= 516, avg=488.47, stdev=41.53, samples=19 00:28:31.385 lat (msec) : 10=0.28%, 20=0.61%, 50=98.58%, 100=0.53% 00:28:31.385 cpu : usr=98.61%, sys=0.98%, ctx=15, majf=0, minf=9 00:28:31.385 IO depths : 1=1.7%, 2=7.8%, 4=24.8%, 8=54.8%, 16=10.9%, 32=0.0%, >=64=0.0% 00:28:31.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 issued rwts: total=4926,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.385 filename2: (groupid=0, jobs=1): err= 0: pid=2248576: Wed Jul 10 15:52:10 2024 00:28:31.385 read: IOPS=490, BW=1963KiB/s (2010kB/s)(19.2MiB/10020msec) 00:28:31.385 slat (usec): min=7, max=182, avg=40.05, stdev=20.63 00:28:31.385 clat (usec): min=19965, max=48083, avg=32269.66, stdev=3142.34 00:28:31.385 lat (usec): min=20018, max=48105, 
avg=32309.70, stdev=3138.41 00:28:31.385 clat percentiles (usec): 00:28:31.385 | 1.00th=[29754], 5.00th=[30278], 10.00th=[30540], 20.00th=[30540], 00:28:31.385 | 30.00th=[30802], 40.00th=[31065], 50.00th=[31065], 60.00th=[31327], 00:28:31.385 | 70.00th=[31327], 80.00th=[31851], 90.00th=[39060], 95.00th=[39584], 00:28:31.385 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43779], 99.95th=[44303], 00:28:31.385 | 99.99th=[47973] 00:28:31.385 bw ( KiB/s): min= 1536, max= 2176, per=4.20%, avg=1964.00, stdev=167.16, samples=20 00:28:31.385 iops : min= 384, max= 544, avg=491.00, stdev=41.79, samples=20 00:28:31.385 lat (msec) : 20=0.02%, 50=99.98% 00:28:31.385 cpu : usr=98.74%, sys=0.85%, ctx=14, majf=0, minf=9 00:28:31.385 IO depths : 1=2.0%, 2=8.2%, 4=24.9%, 8=54.4%, 16=10.5%, 32=0.0%, >=64=0.0% 00:28:31.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:31.385 issued rwts: total=4918,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:31.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:28:31.385 00:28:31.385 Run status group 0 (all jobs): 00:28:31.385 READ: bw=45.7MiB/s (47.9MB/s), 1776KiB/s-2000KiB/s (1819kB/s-2048kB/s), io=460MiB (483MB), run=10003-10068msec 00:28:31.643 15:52:10 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:28:31.643 15:52:10 -- target/dif.sh@43 -- # local sub 00:28:31.643 15:52:10 -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.643 15:52:10 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:31.643 15:52:10 -- target/dif.sh@36 -- # local sub_id=0 00:28:31.643 15:52:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:10 -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.643 15:52:10 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:31.643 15:52:10 -- target/dif.sh@36 -- # local sub_id=1 00:28:31.643 15:52:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:10 -- target/dif.sh@45 -- # for sub in "$@" 00:28:31.643 15:52:10 -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:31.643 15:52:10 -- target/dif.sh@36 -- # local sub_id=2 00:28:31.643 15:52:10 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
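As a quick worked check of how the group summary above relates to the per-job lines (figures copied from the log, nothing new measured): 460 MiB of reads over the longest per-job runtime of about 10.068 s comes out to roughly 45.7 MiB/s, matching the reported aggregate READ bandwidth, and the 24 per-job bandwidths (1776 to 2000 KiB/s) bracket the same figure.
# Arithmetic only; the inputs are the io= and run= values printed in the run status above.
awk 'BEGIN { printf "aggregate READ bw ~ %.1f MiB/s\n", 460 / 10.068 }'   # prints 45.7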
00:28:31.643 15:52:10 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:10 -- target/dif.sh@115 -- # NULL_DIF=1 00:28:31.643 15:52:10 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:31.643 15:52:10 -- target/dif.sh@115 -- # numjobs=2 00:28:31.643 15:52:10 -- target/dif.sh@115 -- # iodepth=8 00:28:31.643 15:52:10 -- target/dif.sh@115 -- # runtime=5 00:28:31.643 15:52:10 -- target/dif.sh@115 -- # files=1 00:28:31.643 15:52:10 -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:31.643 15:52:10 -- target/dif.sh@28 -- # local sub 00:28:31.643 15:52:10 -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.643 15:52:10 -- target/dif.sh@31 -- # create_subsystem 0 00:28:31.643 15:52:10 -- target/dif.sh@18 -- # local sub_id=0 00:28:31.643 15:52:10 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 bdev_null0 00:28:31.643 15:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:10 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:31.643 15:52:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:10 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:31.643 15:52:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 15:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.643 15:52:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:31.643 15:52:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.643 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:31.643 [2024-07-10 15:52:11.015381] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.902 15:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.902 15:52:11 -- target/dif.sh@30 -- # for sub in "$@" 00:28:31.902 15:52:11 -- target/dif.sh@31 -- # create_subsystem 1 00:28:31.902 15:52:11 -- target/dif.sh@18 -- # local sub_id=1 00:28:31.902 15:52:11 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:31.902 15:52:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.902 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:31.902 bdev_null1 00:28:31.902 15:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.902 15:52:11 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:31.902 15:52:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.902 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:31.902 15:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.902 15:52:11 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:31.902 15:52:11 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.902 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:31.902 15:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.902 15:52:11 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:31.902 15:52:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.902 15:52:11 -- common/autotest_common.sh@10 -- # set +x 00:28:31.902 15:52:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.902 15:52:11 -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:31.902 15:52:11 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:31.902 15:52:11 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.902 15:52:11 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:31.902 15:52:11 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:31.902 15:52:11 -- nvmf/common.sh@520 -- # config=() 00:28:31.902 15:52:11 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:31.902 15:52:11 -- target/dif.sh@82 -- # gen_fio_conf 00:28:31.902 15:52:11 -- nvmf/common.sh@520 -- # local subsystem config 00:28:31.902 15:52:11 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:31.902 15:52:11 -- target/dif.sh@54 -- # local file 00:28:31.902 15:52:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:31.902 15:52:11 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:31.902 15:52:11 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.902 15:52:11 -- target/dif.sh@56 -- # cat 00:28:31.902 15:52:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:31.902 { 00:28:31.902 "params": { 00:28:31.902 "name": "Nvme$subsystem", 00:28:31.902 "trtype": "$TEST_TRANSPORT", 00:28:31.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:31.902 "adrfam": "ipv4", 00:28:31.902 "trsvcid": "$NVMF_PORT", 00:28:31.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.902 "hdgst": ${hdgst:-false}, 00:28:31.902 "ddgst": ${ddgst:-false} 00:28:31.902 }, 00:28:31.902 "method": "bdev_nvme_attach_controller" 00:28:31.902 } 00:28:31.902 EOF 00:28:31.902 )") 00:28:31.902 15:52:11 -- common/autotest_common.sh@1320 -- # shift 00:28:31.902 15:52:11 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:31.902 15:52:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.902 15:52:11 -- nvmf/common.sh@542 -- # cat 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.902 15:52:11 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:31.902 15:52:11 -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.902 15:52:11 -- target/dif.sh@73 -- # cat 00:28:31.902 15:52:11 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:31.902 15:52:11 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:31.902 { 00:28:31.902 "params": { 00:28:31.902 "name": "Nvme$subsystem", 00:28:31.902 "trtype": "$TEST_TRANSPORT", 00:28:31.902 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:28:31.902 "adrfam": "ipv4", 00:28:31.902 "trsvcid": "$NVMF_PORT", 00:28:31.902 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:31.902 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:31.902 "hdgst": ${hdgst:-false}, 00:28:31.902 "ddgst": ${ddgst:-false} 00:28:31.902 }, 00:28:31.902 "method": "bdev_nvme_attach_controller" 00:28:31.902 } 00:28:31.902 EOF 00:28:31.902 )") 00:28:31.902 15:52:11 -- target/dif.sh@72 -- # (( file++ )) 00:28:31.902 15:52:11 -- target/dif.sh@72 -- # (( file <= files )) 00:28:31.902 15:52:11 -- nvmf/common.sh@542 -- # cat 00:28:31.902 15:52:11 -- nvmf/common.sh@544 -- # jq . 00:28:31.902 15:52:11 -- nvmf/common.sh@545 -- # IFS=, 00:28:31.902 15:52:11 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:31.902 "params": { 00:28:31.902 "name": "Nvme0", 00:28:31.902 "trtype": "tcp", 00:28:31.902 "traddr": "10.0.0.2", 00:28:31.902 "adrfam": "ipv4", 00:28:31.902 "trsvcid": "4420", 00:28:31.902 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:31.902 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:31.902 "hdgst": false, 00:28:31.902 "ddgst": false 00:28:31.902 }, 00:28:31.902 "method": "bdev_nvme_attach_controller" 00:28:31.902 },{ 00:28:31.902 "params": { 00:28:31.902 "name": "Nvme1", 00:28:31.902 "trtype": "tcp", 00:28:31.902 "traddr": "10.0.0.2", 00:28:31.902 "adrfam": "ipv4", 00:28:31.902 "trsvcid": "4420", 00:28:31.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:31.902 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:31.902 "hdgst": false, 00:28:31.902 "ddgst": false 00:28:31.902 }, 00:28:31.902 "method": "bdev_nvme_attach_controller" 00:28:31.902 }' 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.902 15:52:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.902 15:52:11 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:31.902 15:52:11 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:31.902 15:52:11 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:31.902 15:52:11 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:31.902 15:52:11 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:32.160 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:32.160 ... 00:28:32.160 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:32.160 ... 00:28:32.160 fio-3.35 00:28:32.160 Starting 4 threads 00:28:32.160 EAL: No free 2048 kB hugepages reported on node 1 00:28:32.726 [2024-07-10 15:52:11.928799] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:28:32.726 [2024-07-10 15:52:11.928907] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:37.993 00:28:37.993 filename0: (groupid=0, jobs=1): err= 0: pid=2249994: Wed Jul 10 15:52:17 2024 00:28:37.993 read: IOPS=1892, BW=14.8MiB/s (15.5MB/s)(73.9MiB/5001msec) 00:28:37.993 slat (nsec): min=6768, max=73432, avg=17266.38, stdev=10172.12 00:28:37.993 clat (usec): min=1077, max=8263, avg=4170.62, stdev=600.68 00:28:37.993 lat (usec): min=1089, max=8283, avg=4187.89, stdev=601.32 00:28:37.993 clat percentiles (usec): 00:28:37.993 | 1.00th=[ 2835], 5.00th=[ 3490], 10.00th=[ 3621], 20.00th=[ 3752], 00:28:37.993 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4178], 00:28:37.993 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 5080], 95.00th=[ 5407], 00:28:37.993 | 99.00th=[ 6128], 99.50th=[ 6521], 99.90th=[ 7504], 99.95th=[ 7635], 00:28:37.993 | 99.99th=[ 8291] 00:28:37.993 bw ( KiB/s): min=13824, max=15680, per=24.96%, avg=15134.22, stdev=631.11, samples=9 00:28:37.993 iops : min= 1728, max= 1960, avg=1891.78, stdev=78.89, samples=9 00:28:37.993 lat (msec) : 2=0.38%, 4=36.83%, 10=62.79% 00:28:37.993 cpu : usr=95.34%, sys=4.18%, ctx=11, majf=0, minf=9 00:28:37.993 IO depths : 1=0.1%, 2=11.1%, 4=61.6%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 issued rwts: total=9465,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.993 filename0: (groupid=0, jobs=1): err= 0: pid=2249995: Wed Jul 10 15:52:17 2024 00:28:37.993 read: IOPS=1904, BW=14.9MiB/s (15.6MB/s)(74.4MiB/5002msec) 00:28:37.993 slat (nsec): min=6759, max=72636, avg=16529.40, stdev=10202.00 00:28:37.993 clat (usec): min=1361, max=7423, avg=4144.31, stdev=559.90 00:28:37.993 lat (usec): min=1368, max=7443, avg=4160.84, stdev=560.07 00:28:37.993 clat percentiles (usec): 00:28:37.993 | 1.00th=[ 2868], 5.00th=[ 3425], 10.00th=[ 3589], 20.00th=[ 3785], 00:28:37.993 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4080], 60.00th=[ 4178], 00:28:37.993 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 5014], 95.00th=[ 5342], 00:28:37.993 | 99.00th=[ 5800], 99.50th=[ 6063], 99.90th=[ 6849], 99.95th=[ 7111], 00:28:37.993 | 99.99th=[ 7439] 00:28:37.993 bw ( KiB/s): min=13760, max=16064, per=25.07%, avg=15203.56, stdev=723.14, samples=9 00:28:37.993 iops : min= 1720, max= 2008, avg=1900.44, stdev=90.39, samples=9 00:28:37.993 lat (msec) : 2=0.13%, 4=37.49%, 10=62.38% 00:28:37.993 cpu : usr=94.98%, sys=4.56%, ctx=7, majf=0, minf=9 00:28:37.993 IO depths : 1=0.1%, 2=12.3%, 4=61.2%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 complete : 0=0.0%, 4=91.4%, 8=8.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 issued rwts: total=9528,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.993 filename1: (groupid=0, jobs=1): err= 0: pid=2249996: Wed Jul 10 15:52:17 2024 00:28:37.993 read: IOPS=1897, BW=14.8MiB/s (15.5MB/s)(74.2MiB/5004msec) 00:28:37.993 slat (usec): min=4, max=323, avg=16.57, stdev=10.25 00:28:37.993 clat (usec): min=763, max=9624, avg=4157.43, stdev=749.98 00:28:37.993 lat (usec): min=782, max=9638, avg=4174.00, stdev=750.74 00:28:37.993 clat percentiles (usec): 00:28:37.993 | 1.00th=[ 1958], 5.00th=[ 
3392], 10.00th=[ 3589], 20.00th=[ 3720], 00:28:37.993 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4178], 00:28:37.993 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 5145], 95.00th=[ 5473], 00:28:37.993 | 99.00th=[ 6849], 99.50th=[ 7373], 99.90th=[ 8717], 99.95th=[ 9241], 00:28:37.993 | 99.99th=[ 9634] 00:28:37.993 bw ( KiB/s): min=13712, max=15904, per=25.03%, avg=15179.20, stdev=733.76, samples=10 00:28:37.993 iops : min= 1714, max= 1988, avg=1897.40, stdev=91.72, samples=10 00:28:37.993 lat (usec) : 1000=0.20% 00:28:37.993 lat (msec) : 2=1.00%, 4=41.39%, 10=57.41% 00:28:37.993 cpu : usr=67.22%, sys=14.59%, ctx=156, majf=0, minf=9 00:28:37.993 IO depths : 1=0.7%, 2=10.4%, 4=62.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 issued rwts: total=9495,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.993 filename1: (groupid=0, jobs=1): err= 0: pid=2249997: Wed Jul 10 15:52:17 2024 00:28:37.993 read: IOPS=1888, BW=14.8MiB/s (15.5MB/s)(73.8MiB/5002msec) 00:28:37.993 slat (nsec): min=4211, max=59542, avg=17900.68, stdev=8648.11 00:28:37.993 clat (usec): min=761, max=44647, avg=4177.29, stdev=1347.41 00:28:37.993 lat (usec): min=774, max=44660, avg=4195.19, stdev=1347.44 00:28:37.993 clat percentiles (usec): 00:28:37.993 | 1.00th=[ 2737], 5.00th=[ 3425], 10.00th=[ 3556], 20.00th=[ 3752], 00:28:37.993 | 30.00th=[ 3916], 40.00th=[ 3982], 50.00th=[ 4080], 60.00th=[ 4146], 00:28:37.993 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 5080], 95.00th=[ 5407], 00:28:37.993 | 99.00th=[ 6587], 99.50th=[ 6980], 99.90th=[ 9241], 99.95th=[44827], 00:28:37.993 | 99.99th=[44827] 00:28:37.993 bw ( KiB/s): min=12128, max=16160, per=24.83%, avg=15056.00, stdev=1280.75, samples=9 00:28:37.993 iops : min= 1516, max= 2020, avg=1882.00, stdev=160.09, samples=9 00:28:37.993 lat (usec) : 1000=0.10% 00:28:37.993 lat (msec) : 2=0.51%, 4=40.05%, 10=59.26%, 50=0.08% 00:28:37.993 cpu : usr=95.72%, sys=3.78%, ctx=11, majf=0, minf=9 00:28:37.993 IO depths : 1=0.2%, 2=9.9%, 4=63.3%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:37.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:37.993 issued rwts: total=9445,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:37.993 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:37.993 00:28:37.993 Run status group 0 (all jobs): 00:28:37.993 READ: bw=59.2MiB/s (62.1MB/s), 14.8MiB/s-14.9MiB/s (15.5MB/s-15.6MB/s), io=296MiB (311MB), run=5001-5004msec 00:28:37.993 15:52:17 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:37.993 15:52:17 -- target/dif.sh@43 -- # local sub 00:28:37.993 15:52:17 -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.993 15:52:17 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:37.993 15:52:17 -- target/dif.sh@36 -- # local sub_id=0 00:28:37.993 15:52:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:37.993 15:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.993 15:52:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:37.993 15:52:17 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.993 15:52:17 -- target/dif.sh@45 -- # for sub in "$@" 00:28:37.993 15:52:17 -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:37.993 15:52:17 -- target/dif.sh@36 -- # local sub_id=1 00:28:37.993 15:52:17 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:37.993 15:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.993 15:52:17 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:37.993 15:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.993 00:28:37.993 real 0m24.089s 00:28:37.993 user 4m27.985s 00:28:37.993 sys 0m7.922s 00:28:37.993 15:52:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 ************************************ 00:28:37.993 END TEST fio_dif_rand_params 00:28:37.993 ************************************ 00:28:37.993 15:52:17 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:37.993 15:52:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:37.993 15:52:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 ************************************ 00:28:37.993 START TEST fio_dif_digest 00:28:37.993 ************************************ 00:28:37.993 15:52:17 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:28:37.993 15:52:17 -- target/dif.sh@123 -- # local NULL_DIF 00:28:37.993 15:52:17 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:37.993 15:52:17 -- target/dif.sh@125 -- # local hdgst ddgst 00:28:37.993 15:52:17 -- target/dif.sh@127 -- # NULL_DIF=3 00:28:37.993 15:52:17 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:37.993 15:52:17 -- target/dif.sh@127 -- # numjobs=3 00:28:37.993 15:52:17 -- target/dif.sh@127 -- # iodepth=3 00:28:37.993 15:52:17 -- target/dif.sh@127 -- # runtime=10 00:28:37.993 15:52:17 -- target/dif.sh@128 -- # hdgst=true 00:28:37.993 15:52:17 -- target/dif.sh@128 -- # ddgst=true 00:28:37.993 15:52:17 -- target/dif.sh@130 -- # create_subsystems 0 00:28:37.993 15:52:17 -- target/dif.sh@28 -- # local sub 00:28:37.993 15:52:17 -- target/dif.sh@30 -- # for sub in "$@" 00:28:37.993 15:52:17 -- target/dif.sh@31 -- # create_subsystem 0 00:28:37.993 15:52:17 -- target/dif.sh@18 -- # local sub_id=0 00:28:37.993 15:52:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:37.993 15:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 bdev_null0 00:28:37.993 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.993 15:52:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:37.993 15:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:37.993 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.993 15:52:17 -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:37.993 15:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.993 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:38.251 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.251 15:52:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:38.251 15:52:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.251 15:52:17 -- common/autotest_common.sh@10 -- # set +x 00:28:38.251 [2024-07-10 15:52:17.377647] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.251 15:52:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.251 15:52:17 -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:38.251 15:52:17 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:38.251 15:52:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:38.251 15:52:17 -- nvmf/common.sh@520 -- # config=() 00:28:38.251 15:52:17 -- nvmf/common.sh@520 -- # local subsystem config 00:28:38.251 15:52:17 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:38.251 15:52:17 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:38.251 { 00:28:38.251 "params": { 00:28:38.251 "name": "Nvme$subsystem", 00:28:38.251 "trtype": "$TEST_TRANSPORT", 00:28:38.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:38.251 "adrfam": "ipv4", 00:28:38.251 "trsvcid": "$NVMF_PORT", 00:28:38.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:38.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:38.251 "hdgst": ${hdgst:-false}, 00:28:38.251 "ddgst": ${ddgst:-false} 00:28:38.251 }, 00:28:38.251 "method": "bdev_nvme_attach_controller" 00:28:38.251 } 00:28:38.251 EOF 00:28:38.251 )") 00:28:38.251 15:52:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.251 15:52:17 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.251 15:52:17 -- target/dif.sh@82 -- # gen_fio_conf 00:28:38.251 15:52:17 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:28:38.251 15:52:17 -- target/dif.sh@54 -- # local file 00:28:38.251 15:52:17 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:38.251 15:52:17 -- target/dif.sh@56 -- # cat 00:28:38.251 15:52:17 -- common/autotest_common.sh@1318 -- # local sanitizers 00:28:38.251 15:52:17 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:38.251 15:52:17 -- common/autotest_common.sh@1320 -- # shift 00:28:38.251 15:52:17 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:28:38.251 15:52:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.251 15:52:17 -- nvmf/common.sh@542 -- # cat 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:38.251 15:52:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:28:38.251 15:52:17 -- target/dif.sh@72 -- # (( file <= files )) 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # grep libasan 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:38.251 15:52:17 -- nvmf/common.sh@544 -- # jq . 
00:28:38.251 15:52:17 -- nvmf/common.sh@545 -- # IFS=, 00:28:38.251 15:52:17 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:38.251 "params": { 00:28:38.251 "name": "Nvme0", 00:28:38.251 "trtype": "tcp", 00:28:38.251 "traddr": "10.0.0.2", 00:28:38.251 "adrfam": "ipv4", 00:28:38.251 "trsvcid": "4420", 00:28:38.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:38.251 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:38.251 "hdgst": true, 00:28:38.251 "ddgst": true 00:28:38.251 }, 00:28:38.251 "method": "bdev_nvme_attach_controller" 00:28:38.251 }' 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:38.251 15:52:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:38.251 15:52:17 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:28:38.251 15:52:17 -- common/autotest_common.sh@1324 -- # asan_lib= 00:28:38.251 15:52:17 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:28:38.251 15:52:17 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:38.251 15:52:17 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:38.251 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:38.251 ... 00:28:38.251 fio-3.35 00:28:38.251 Starting 3 threads 00:28:38.510 EAL: No free 2048 kB hugepages reported on node 1 00:28:39.076 [2024-07-10 15:52:18.145225] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
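Both the earlier dif runs and the digest run just launched use the same transport plumbing: gen_nvmf_target_json prints one bdev_nvme_attach_controller entry per subsystem onto /dev/fd/62, gen_fio_conf writes the fio job onto /dev/fd/61, and fio is started with SPDK's bdev ioengine preloaded. For the digest case the only change in the generated entry is "hdgst": true / "ddgst": true, i.e. NVMe/TCP header and data digests are enabled on the initiator connection. Stripped of the file-descriptor plumbing, the invocation reduces to the sketch below; the paths, addresses and attach parameters are the ones printed in the trace above, while the outer "subsystems"/"bdev" wrapper and the job-file details (including the Nvme0n1 filename) are reconstructed assumptions, not copied from the trace.

# Sketch only: attach config written to a regular file instead of /dev/fd/62; the
# wrapper around the printed bdev_nvme_attach_controller entry is an assumption.
cat > /tmp/bdev_attach.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": true,
            "ddgst": true
          }
        }
      ]
    }
  ]
}
JSON
# Job file matching the digest run above (3 jobs, 128k blocks, iodepth 3, 10 s);
# digests are handled by the attach entry, not by fio. The filename is assumed to
# be the namespace bdev exposed by the attached controller.
cat > /tmp/dif_digest.fio <<'FIO'
[global]
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
runtime=10
time_based=1

[filename0]
filename=Nvme0n1
FIO
# Preload the fio plugin and point it at both files, as the trace above does.
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev_attach.json /tmp/dif_digest.fio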
00:28:39.076 [2024-07-10 15:52:18.145304] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:28:49.057 00:28:49.057 filename0: (groupid=0, jobs=1): err= 0: pid=2250770: Wed Jul 10 15:52:28 2024 00:28:49.057 read: IOPS=196, BW=24.5MiB/s (25.7MB/s)(246MiB/10048msec) 00:28:49.057 slat (nsec): min=7022, max=50063, avg=13033.91, stdev=2994.13 00:28:49.057 clat (usec): min=8635, max=58331, avg=15255.83, stdev=3286.80 00:28:49.057 lat (usec): min=8647, max=58343, avg=15268.87, stdev=3286.77 00:28:49.057 clat percentiles (usec): 00:28:49.057 | 1.00th=[10290], 5.00th=[12780], 10.00th=[13566], 20.00th=[14222], 00:28:49.057 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:28:49.057 | 70.00th=[15664], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:28:49.057 | 99.00th=[18482], 99.50th=[50594], 99.90th=[57410], 99.95th=[58459], 00:28:49.057 | 99.99th=[58459] 00:28:49.057 bw ( KiB/s): min=22528, max=26880, per=33.30%, avg=25190.40, stdev=1121.74, samples=20 00:28:49.057 iops : min= 176, max= 210, avg=196.80, stdev= 8.76, samples=20 00:28:49.057 lat (msec) : 10=0.61%, 20=98.83%, 50=0.05%, 100=0.51% 00:28:49.057 cpu : usr=90.40%, sys=9.14%, ctx=22, majf=0, minf=201 00:28:49.057 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.057 issued rwts: total=1971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.057 filename0: (groupid=0, jobs=1): err= 0: pid=2250771: Wed Jul 10 15:52:28 2024 00:28:49.057 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(247MiB/10046msec) 00:28:49.057 slat (nsec): min=5754, max=40156, avg=13005.84, stdev=3150.30 00:28:49.057 clat (usec): min=8853, max=58430, avg=15221.66, stdev=3971.25 00:28:49.057 lat (usec): min=8864, max=58443, avg=15234.66, stdev=3971.33 00:28:49.057 clat percentiles (usec): 00:28:49.057 | 1.00th=[10814], 5.00th=[12911], 10.00th=[13435], 20.00th=[13960], 00:28:49.057 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[15270], 00:28:49.057 | 70.00th=[15533], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:28:49.057 | 99.00th=[18744], 99.50th=[55837], 99.90th=[57934], 99.95th=[58459], 00:28:49.057 | 99.99th=[58459] 00:28:49.057 bw ( KiB/s): min=23296, max=27136, per=33.38%, avg=25256.90, stdev=1051.15, samples=20 00:28:49.057 iops : min= 182, max= 212, avg=197.30, stdev= 8.21, samples=20 00:28:49.057 lat (msec) : 10=0.15%, 20=98.99%, 100=0.86% 00:28:49.057 cpu : usr=90.20%, sys=9.35%, ctx=28, majf=0, minf=86 00:28:49.057 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.057 issued rwts: total=1975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.057 filename0: (groupid=0, jobs=1): err= 0: pid=2250772: Wed Jul 10 15:52:28 2024 00:28:49.057 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10050msec) 00:28:49.057 slat (nsec): min=7392, max=79443, avg=13159.93, stdev=3307.84 00:28:49.057 clat (usec): min=8855, max=58959, avg=15060.74, stdev=2363.37 00:28:49.057 lat (usec): min=8867, max=58971, avg=15073.90, stdev=2363.41 00:28:49.057 clat percentiles (usec): 
00:28:49.057 | 1.00th=[10159], 5.00th=[12387], 10.00th=[13435], 20.00th=[14091], 00:28:49.057 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:28:49.057 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16581], 95.00th=[17171], 00:28:49.057 | 99.00th=[17957], 99.50th=[18220], 99.90th=[57410], 99.95th=[58983], 00:28:49.057 | 99.99th=[58983] 00:28:49.057 bw ( KiB/s): min=22272, max=27136, per=33.69%, avg=25484.80, stdev=1170.12, samples=20 00:28:49.057 iops : min= 174, max= 212, avg=199.10, stdev= 9.14, samples=20 00:28:49.057 lat (msec) : 10=0.75%, 20=99.05%, 100=0.20% 00:28:49.057 cpu : usr=90.76%, sys=8.79%, ctx=19, majf=0, minf=162 00:28:49.057 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:49.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:49.057 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:49.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:49.057 00:28:49.057 Run status group 0 (all jobs): 00:28:49.057 READ: bw=73.9MiB/s (77.5MB/s), 24.5MiB/s-24.8MiB/s (25.7MB/s-26.0MB/s), io=743MiB (779MB), run=10046-10050msec 00:28:49.315 15:52:28 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:49.315 15:52:28 -- target/dif.sh@43 -- # local sub 00:28:49.315 15:52:28 -- target/dif.sh@45 -- # for sub in "$@" 00:28:49.315 15:52:28 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:49.315 15:52:28 -- target/dif.sh@36 -- # local sub_id=0 00:28:49.315 15:52:28 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:49.315 15:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.315 15:52:28 -- common/autotest_common.sh@10 -- # set +x 00:28:49.315 15:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.315 15:52:28 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:49.315 15:52:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.315 15:52:28 -- common/autotest_common.sh@10 -- # set +x 00:28:49.315 15:52:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.315 00:28:49.315 real 0m11.234s 00:28:49.315 user 0m28.364s 00:28:49.315 sys 0m3.024s 00:28:49.315 15:52:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:49.315 15:52:28 -- common/autotest_common.sh@10 -- # set +x 00:28:49.315 ************************************ 00:28:49.315 END TEST fio_dif_digest 00:28:49.315 ************************************ 00:28:49.315 15:52:28 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:49.315 15:52:28 -- target/dif.sh@147 -- # nvmftestfini 00:28:49.315 15:52:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:49.315 15:52:28 -- nvmf/common.sh@116 -- # sync 00:28:49.315 15:52:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:49.315 15:52:28 -- nvmf/common.sh@119 -- # set +e 00:28:49.315 15:52:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:49.315 15:52:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:49.315 rmmod nvme_tcp 00:28:49.315 rmmod nvme_fabrics 00:28:49.315 rmmod nvme_keyring 00:28:49.315 15:52:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:49.315 15:52:28 -- nvmf/common.sh@123 -- # set -e 00:28:49.315 15:52:28 -- nvmf/common.sh@124 -- # return 0 00:28:49.315 15:52:28 -- nvmf/common.sh@477 -- # '[' -n 2244546 ']' 00:28:49.315 15:52:28 -- nvmf/common.sh@478 -- # killprocess 2244546 00:28:49.315 15:52:28 -- common/autotest_common.sh@926 -- # '[' -z 
2244546 ']' 00:28:49.315 15:52:28 -- common/autotest_common.sh@930 -- # kill -0 2244546 00:28:49.315 15:52:28 -- common/autotest_common.sh@931 -- # uname 00:28:49.315 15:52:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:49.315 15:52:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2244546 00:28:49.572 15:52:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:28:49.572 15:52:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:28:49.572 15:52:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2244546' 00:28:49.572 killing process with pid 2244546 00:28:49.572 15:52:28 -- common/autotest_common.sh@945 -- # kill 2244546 00:28:49.572 15:52:28 -- common/autotest_common.sh@950 -- # wait 2244546 00:28:49.830 15:52:28 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:28:49.830 15:52:28 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:50.762 Waiting for block devices as requested 00:28:50.762 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:28:51.020 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:51.020 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:51.020 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:51.278 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:51.278 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:51.278 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:51.278 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:51.278 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:51.536 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:28:51.536 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:28:51.536 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:28:51.793 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:28:51.793 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:28:51.793 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:28:51.793 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:28:52.051 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:28:52.051 15:52:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:52.051 15:52:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:52.051 15:52:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:52.051 15:52:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:52.051 15:52:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:52.051 15:52:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:52.051 15:52:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.582 15:52:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:54.582 00:28:54.582 real 1m7.110s 00:28:54.582 user 6m24.356s 00:28:54.582 sys 0m19.915s 00:28:54.582 15:52:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:54.582 15:52:33 -- common/autotest_common.sh@10 -- # set +x 00:28:54.582 ************************************ 00:28:54.582 END TEST nvmf_dif 00:28:54.582 ************************************ 00:28:54.582 15:52:33 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:54.582 15:52:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:54.582 15:52:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:54.582 15:52:33 -- common/autotest_common.sh@10 -- # set +x 00:28:54.582 ************************************ 00:28:54.582 START TEST nvmf_abort_qd_sizes 00:28:54.582 ************************************ 
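The suite starting here drives abort commands against an NVMe/TCP target at several queue depths. As the trace further down shows, each iteration is simply the SPDK abort example pointed at the target subsystem, with only -q changing between runs (the qds array is 4, 24 and 64):

# Pattern of the abort runs that appear later in this log; everything except the
# queue depth loop is taken from those invocations.
for qd in 4 24 64; do
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
    -q "$qd" -w rw -M 50 -o 4096 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
done

The NS/CTRLR summary lines printed after each run below report how many I/Os completed and how many aborts were submitted at that queue depth.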
00:28:54.582 15:52:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:54.582 * Looking for test storage... 00:28:54.582 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:54.582 15:52:33 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:54.582 15:52:33 -- nvmf/common.sh@7 -- # uname -s 00:28:54.582 15:52:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:54.582 15:52:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:54.582 15:52:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:54.582 15:52:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:54.582 15:52:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:54.582 15:52:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:54.582 15:52:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:54.582 15:52:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:54.582 15:52:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:54.582 15:52:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:54.582 15:52:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:54.582 15:52:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:54.582 15:52:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:54.582 15:52:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:54.582 15:52:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:54.582 15:52:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:54.582 15:52:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:54.582 15:52:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:54.582 15:52:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:54.582 15:52:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.582 15:52:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.582 15:52:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.582 15:52:33 -- paths/export.sh@5 -- # export PATH 00:28:54.582 15:52:33 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:54.582 15:52:33 -- nvmf/common.sh@46 -- # : 0 00:28:54.582 15:52:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:54.582 15:52:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:54.582 15:52:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:54.582 15:52:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:54.582 15:52:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:54.582 15:52:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:54.582 15:52:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:54.582 15:52:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:54.582 15:52:33 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:28:54.582 15:52:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:54.582 15:52:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:54.582 15:52:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:54.582 15:52:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:54.582 15:52:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:54.582 15:52:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:54.582 15:52:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:54.582 15:52:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:54.582 15:52:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:54.582 15:52:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:54.582 15:52:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:54.582 15:52:33 -- common/autotest_common.sh@10 -- # set +x 00:28:55.958 15:52:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:55.958 15:52:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:55.958 15:52:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:55.958 15:52:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:55.958 15:52:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:55.958 15:52:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:55.958 15:52:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:55.958 15:52:35 -- nvmf/common.sh@294 -- # net_devs=() 00:28:55.958 15:52:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:55.958 15:52:35 -- nvmf/common.sh@295 -- # e810=() 00:28:55.958 15:52:35 -- nvmf/common.sh@295 -- # local -ga e810 00:28:55.958 15:52:35 -- nvmf/common.sh@296 -- # x722=() 00:28:55.958 15:52:35 -- nvmf/common.sh@296 -- # local -ga x722 00:28:55.958 15:52:35 -- nvmf/common.sh@297 -- # mlx=() 00:28:55.958 15:52:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:55.958 15:52:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:55.958 15:52:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:55.958 15:52:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:55.958 15:52:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:55.958 15:52:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:55.958 15:52:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:55.958 15:52:35 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:55.958 15:52:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:55.959 15:52:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:55.959 15:52:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:55.959 15:52:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:55.959 15:52:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:55.959 15:52:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:55.959 15:52:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:55.959 15:52:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:55.959 15:52:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:55.959 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:55.959 15:52:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:55.959 15:52:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:55.959 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:55.959 15:52:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:55.959 15:52:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:55.959 15:52:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.959 15:52:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:55.959 15:52:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.959 15:52:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:55.959 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:55.959 15:52:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.959 15:52:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:55.959 15:52:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:55.959 15:52:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:55.959 15:52:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:55.959 15:52:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:55.959 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:55.959 15:52:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:55.959 15:52:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:55.959 15:52:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:55.959 15:52:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:55.959 15:52:35 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:55.959 15:52:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:55.959 15:52:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:55.959 15:52:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:55.959 15:52:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:55.959 15:52:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:55.959 15:52:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:55.959 15:52:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:55.959 15:52:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:55.959 15:52:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:55.959 15:52:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:55.959 15:52:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:56.217 15:52:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:56.218 15:52:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:56.218 15:52:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:56.218 15:52:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:56.218 15:52:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:56.218 15:52:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:56.218 15:52:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:56.218 15:52:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:56.218 15:52:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:56.218 15:52:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:56.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:28:56.218 00:28:56.218 --- 10.0.0.2 ping statistics --- 00:28:56.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.218 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:28:56.218 15:52:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:56.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:56.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:28:56.218 00:28:56.218 --- 10.0.0.1 ping statistics --- 00:28:56.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.218 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:28:56.218 15:52:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.218 15:52:35 -- nvmf/common.sh@410 -- # return 0 00:28:56.218 15:52:35 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:28:56.218 15:52:35 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.593 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:57.593 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:57.593 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:57.593 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:57.593 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:57.594 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:57.594 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:57.594 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:57.594 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:58.531 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:28:58.531 15:52:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.531 15:52:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:58.531 15:52:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:58.531 15:52:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.531 15:52:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:58.531 15:52:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:58.531 15:52:37 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:28:58.531 15:52:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:58.531 15:52:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:58.531 15:52:37 -- common/autotest_common.sh@10 -- # set +x 00:28:58.531 15:52:37 -- nvmf/common.sh@469 -- # nvmfpid=2255782 00:28:58.531 15:52:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:58.531 15:52:37 -- nvmf/common.sh@470 -- # waitforlisten 2255782 00:28:58.531 15:52:37 -- common/autotest_common.sh@819 -- # '[' -z 2255782 ']' 00:28:58.531 15:52:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.531 15:52:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:58.531 15:52:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.531 15:52:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:58.531 15:52:37 -- common/autotest_common.sh@10 -- # set +x 00:28:58.789 [2024-07-10 15:52:37.910324] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
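nvmf_tgt is launched through ip netns exec cvl_0_0_ns_spdk because nvmf_tcp_init, traced above, splits the two e810 ports across network namespaces: cvl_0_0 becomes the target interface inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator, and the two pings confirm the ports reach each other across the namespaces. Collected in one place, that setup is:

# Namespace plumbing collected from the nvmf_tcp_init trace above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

After that the target application runs inside the namespace, exactly as already visible above: ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf.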
00:28:58.789 [2024-07-10 15:52:37.910412] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.789 EAL: No free 2048 kB hugepages reported on node 1 00:28:58.789 [2024-07-10 15:52:37.983045] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.789 [2024-07-10 15:52:38.099068] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:58.789 [2024-07-10 15:52:38.099205] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.789 [2024-07-10 15:52:38.099222] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.789 [2024-07-10 15:52:38.099233] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:58.789 [2024-07-10 15:52:38.099289] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.789 [2024-07-10 15:52:38.099313] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.789 [2024-07-10 15:52:38.099370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.789 [2024-07-10 15:52:38.099372] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:59.722 15:52:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:59.722 15:52:38 -- common/autotest_common.sh@852 -- # return 0 00:28:59.722 15:52:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:59.722 15:52:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:59.722 15:52:38 -- common/autotest_common.sh@10 -- # set +x 00:28:59.722 15:52:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:59.722 15:52:38 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:59.722 15:52:38 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:28:59.722 15:52:38 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:28:59.722 15:52:38 -- scripts/common.sh@311 -- # local bdf bdfs 00:28:59.722 15:52:38 -- scripts/common.sh@312 -- # local nvmes 00:28:59.722 15:52:38 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:28:59.722 15:52:38 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:59.722 15:52:38 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:28:59.722 15:52:38 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:28:59.722 15:52:38 -- scripts/common.sh@322 -- # uname -s 00:28:59.722 15:52:38 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:28:59.722 15:52:38 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:28:59.722 15:52:38 -- scripts/common.sh@327 -- # (( 1 )) 00:28:59.722 15:52:38 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:28:59.722 15:52:38 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:28:59.722 15:52:38 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:28:59.722 15:52:38 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:28:59.722 15:52:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:28:59.722 15:52:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:59.722 15:52:38 -- common/autotest_common.sh@10 -- # set +x 00:28:59.723 ************************************ 00:28:59.723 START TEST 
spdk_target_abort 00:28:59.723 ************************************ 00:28:59.723 15:52:38 -- common/autotest_common.sh@1104 -- # spdk_target 00:28:59.723 15:52:38 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:59.723 15:52:38 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:28:59.723 15:52:38 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:28:59.723 15:52:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.723 15:52:38 -- common/autotest_common.sh@10 -- # set +x 00:29:03.000 spdk_targetn1 00:29:03.000 15:52:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.000 15:52:41 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:03.001 15:52:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.001 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:29:03.001 [2024-07-10 15:52:41.700198] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.001 15:52:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:29:03.001 15:52:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.001 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:29:03.001 15:52:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:29:03.001 15:52:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.001 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:29:03.001 15:52:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:29:03.001 15:52:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:03.001 15:52:41 -- common/autotest_common.sh@10 -- # set +x 00:29:03.001 [2024-07-10 15:52:41.732496] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:03.001 15:52:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:03.001 15:52:41 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:03.001 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.561 Initializing NVMe Controllers 00:29:05.561 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:05.561 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:05.561 Initialization complete. Launching workers. 00:29:05.561 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10581, failed: 0 00:29:05.561 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1327, failed to submit 9254 00:29:05.561 success 754, unsuccess 573, failed 0 00:29:05.561 15:52:44 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:05.561 15:52:44 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:05.561 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.837 [2024-07-10 15:52:48.032489] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc270 is same with the state(5) to be set 00:29:08.837 [2024-07-10 15:52:48.032538] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc270 is same with the state(5) to be set 00:29:08.837 [2024-07-10 15:52:48.032563] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc270 is same with the state(5) to be set 00:29:08.837 [2024-07-10 15:52:48.032598] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc270 is same with the state(5) to be set 00:29:08.837 [2024-07-10 15:52:48.032611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc270 is same with the state(5) to be set 00:29:08.837 [2024-07-10 15:52:48.032623] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14bc270 is same with the state(5) to be set 00:29:08.837 Initializing NVMe Controllers 00:29:08.837 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:08.837 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:08.837 Initialization complete. Launching workers. 
00:29:08.837 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8629, failed: 0 00:29:08.837 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1229, failed to submit 7400 00:29:08.837 success 350, unsuccess 879, failed 0 00:29:08.837 15:52:48 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:08.837 15:52:48 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:29:08.837 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.108 Initializing NVMe Controllers 00:29:12.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:29:12.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:29:12.108 Initialization complete. Launching workers. 00:29:12.108 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32532, failed: 0 00:29:12.108 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2732, failed to submit 29800 00:29:12.108 success 572, unsuccess 2160, failed 0 00:29:12.108 15:52:51 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:29:12.108 15:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:12.108 15:52:51 -- common/autotest_common.sh@10 -- # set +x 00:29:12.108 15:52:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:12.108 15:52:51 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:12.108 15:52:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:12.108 15:52:51 -- common/autotest_common.sh@10 -- # set +x 00:29:13.481 15:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:13.481 15:52:52 -- target/abort_qd_sizes.sh@62 -- # killprocess 2255782 00:29:13.481 15:52:52 -- common/autotest_common.sh@926 -- # '[' -z 2255782 ']' 00:29:13.481 15:52:52 -- common/autotest_common.sh@930 -- # kill -0 2255782 00:29:13.481 15:52:52 -- common/autotest_common.sh@931 -- # uname 00:29:13.481 15:52:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:13.481 15:52:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2255782 00:29:13.481 15:52:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:13.481 15:52:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:13.481 15:52:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2255782' 00:29:13.481 killing process with pid 2255782 00:29:13.481 15:52:52 -- common/autotest_common.sh@945 -- # kill 2255782 00:29:13.481 15:52:52 -- common/autotest_common.sh@950 -- # wait 2255782 00:29:13.745 00:29:13.745 real 0m14.169s 00:29:13.745 user 0m55.962s 00:29:13.745 sys 0m2.559s 00:29:13.745 15:52:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:13.745 15:52:53 -- common/autotest_common.sh@10 -- # set +x 00:29:13.745 ************************************ 00:29:13.745 END TEST spdk_target_abort 00:29:13.745 ************************************ 00:29:13.745 15:52:53 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:29:13.745 15:52:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:13.745 15:52:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:13.745 15:52:53 -- common/autotest_common.sh@10 -- 
# set +x 00:29:13.745 ************************************ 00:29:13.745 START TEST kernel_target_abort 00:29:13.745 ************************************ 00:29:13.745 15:52:53 -- common/autotest_common.sh@1104 -- # kernel_target 00:29:13.745 15:52:53 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:29:13.745 15:52:53 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:29:13.745 15:52:53 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:29:13.745 15:52:53 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:29:13.745 15:52:53 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:29:13.745 15:52:53 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:13.745 15:52:53 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:13.745 15:52:53 -- nvmf/common.sh@627 -- # local block nvme 00:29:13.745 15:52:53 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:29:13.745 15:52:53 -- nvmf/common.sh@630 -- # modprobe nvmet 00:29:13.745 15:52:53 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:13.745 15:52:53 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:15.123 Waiting for block devices as requested 00:29:15.123 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:29:15.123 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:15.123 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:15.123 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:15.381 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:15.381 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:15.381 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:15.381 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:15.640 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:15.640 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:29:15.640 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:29:15.640 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:29:15.640 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:29:15.898 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:29:15.898 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:29:15.898 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:29:15.898 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:29:16.157 15:52:55 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:29:16.157 15:52:55 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:16.157 15:52:55 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:29:16.157 15:52:55 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:29:16.157 15:52:55 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:16.157 No valid GPT data, bailing 00:29:16.157 15:52:55 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:16.157 15:52:55 -- scripts/common.sh@393 -- # pt= 00:29:16.157 15:52:55 -- scripts/common.sh@394 -- # return 1 00:29:16.157 15:52:55 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:29:16.157 15:52:55 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:29:16.157 15:52:55 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:16.157 15:52:55 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:16.157 15:52:55 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:16.157 15:52:55 -- nvmf/common.sh@652 -- # echo 
SPDK-kernel_target 00:29:16.157 15:52:55 -- nvmf/common.sh@654 -- # echo 1 00:29:16.157 15:52:55 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:29:16.157 15:52:55 -- nvmf/common.sh@656 -- # echo 1 00:29:16.157 15:52:55 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:29:16.157 15:52:55 -- nvmf/common.sh@663 -- # echo tcp 00:29:16.157 15:52:55 -- nvmf/common.sh@664 -- # echo 4420 00:29:16.157 15:52:55 -- nvmf/common.sh@665 -- # echo ipv4 00:29:16.157 15:52:55 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:16.157 15:52:55 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:29:16.157 00:29:16.157 Discovery Log Number of Records 2, Generation counter 2 00:29:16.157 =====Discovery Log Entry 0====== 00:29:16.157 trtype: tcp 00:29:16.157 adrfam: ipv4 00:29:16.157 subtype: current discovery subsystem 00:29:16.157 treq: not specified, sq flow control disable supported 00:29:16.157 portid: 1 00:29:16.157 trsvcid: 4420 00:29:16.157 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:16.157 traddr: 10.0.0.1 00:29:16.157 eflags: none 00:29:16.157 sectype: none 00:29:16.157 =====Discovery Log Entry 1====== 00:29:16.157 trtype: tcp 00:29:16.157 adrfam: ipv4 00:29:16.157 subtype: nvme subsystem 00:29:16.157 treq: not specified, sq flow control disable supported 00:29:16.157 portid: 1 00:29:16.157 trsvcid: 4420 00:29:16.157 subnqn: kernel_target 00:29:16.157 traddr: 10.0.0.1 00:29:16.157 eflags: none 00:29:16.157 sectype: none 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:16.157 15:52:55 -- target/abort_qd_sizes.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:16.415 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.697 Initializing NVMe Controllers 00:29:19.697 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:19.697 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:19.697 Initialization complete. Launching workers. 00:29:19.697 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 30052, failed: 0 00:29:19.697 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 30052, failed to submit 0 00:29:19.697 success 0, unsuccess 30052, failed 0 00:29:19.697 15:52:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:19.697 15:52:58 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:19.697 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.973 Initializing NVMe Controllers 00:29:22.973 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:22.973 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:22.973 Initialization complete. Launching workers. 00:29:22.973 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 60895, failed: 0 00:29:22.973 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 15358, failed to submit 45537 00:29:22.973 success 0, unsuccess 15358, failed 0 00:29:22.973 15:53:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:22.973 15:53:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:29:22.973 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.498 Initializing NVMe Controllers 00:29:25.498 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:29:25.498 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:29:25.498 Initialization complete. Launching workers. 
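
Note on the kernel_target_abort setup traced above: the configure_kernel_target helper from nvmf/common.sh builds the target out of the kernel nvmet configfs tree (modprobe nvmet, mkdir the subsystem/namespace/port nodes, echo the device and listener attributes, then link the subsystem into the port) rather than using the SPDK target. The following is a condensed, hedged sketch of that sequence, reusing the values visible in the trace (subsystem kernel_target, backing device /dev/nvme0n1, listener 10.0.0.1:4420 over TCP); the configfs attribute file names are the standard kernel nvmet ones, spelled out here as an illustration of the technique, not copied from the helper itself.

#!/usr/bin/env bash
# Sketch: export a local NVMe namespace over NVMe/TCP via the kernel nvmet target.
set -e
modprobe nvmet nvmet-tcp

subsys=/sys/kernel/config/nvmet/subsystems/kernel_target
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"

echo SPDK-kernel_target > "$subsys/attr_serial"          # serial stamp; exact attribute used by the helper is an assumption
echo 1                  > "$subsys/attr_allow_any_host"  # skip host NQN allow-listing (assumption)
echo /dev/nvme0n1       > "$subsys/namespaces/1/device_path"
echo 1                  > "$subsys/namespaces/1/enable"

echo 10.0.0.1 > "$port/addr_traddr"
echo tcp      > "$port/addr_trtype"
echo 4420     > "$port/addr_trsvcid"
echo ipv4     > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"                      # expose the subsystem on the listener

# Teardown mirrors clean_kernel_target further down in this log:
#   echo 0 > "$subsys/namespaces/1/enable"
#   rm -f "$port/subsystems/kernel_target" && rmdir "$subsys/namespaces/1" "$port" "$subsys"
#   modprobe -r nvmet_tcp nvmet

Once the subsystem is linked into the port, the abort example is pointed at it with -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' at queue depths 4, 24 and 64, exactly as the rabort loop above shows.
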
00:29:25.498 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 59463, failed: 0 00:29:25.498 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 14842, failed to submit 44621 00:29:25.498 success 0, unsuccess 14842, failed 0 00:29:25.498 15:53:04 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:29:25.498 15:53:04 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:29:25.498 15:53:04 -- nvmf/common.sh@677 -- # echo 0 00:29:25.498 15:53:04 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:29:25.755 15:53:04 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:29:25.755 15:53:04 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:25.755 15:53:04 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:29:25.755 15:53:04 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:29:25.755 15:53:04 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:29:25.755 00:29:25.755 real 0m11.869s 00:29:25.755 user 0m4.344s 00:29:25.755 sys 0m2.498s 00:29:25.755 15:53:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.755 15:53:04 -- common/autotest_common.sh@10 -- # set +x 00:29:25.755 ************************************ 00:29:25.755 END TEST kernel_target_abort 00:29:25.755 ************************************ 00:29:25.755 15:53:04 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:29:25.755 15:53:04 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:29:25.755 15:53:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:25.755 15:53:04 -- nvmf/common.sh@116 -- # sync 00:29:25.755 15:53:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:25.755 15:53:04 -- nvmf/common.sh@119 -- # set +e 00:29:25.755 15:53:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:25.755 15:53:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:25.755 rmmod nvme_tcp 00:29:25.755 rmmod nvme_fabrics 00:29:25.755 rmmod nvme_keyring 00:29:25.755 15:53:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:25.755 15:53:05 -- nvmf/common.sh@123 -- # set -e 00:29:25.755 15:53:05 -- nvmf/common.sh@124 -- # return 0 00:29:25.755 15:53:05 -- nvmf/common.sh@477 -- # '[' -n 2255782 ']' 00:29:25.755 15:53:05 -- nvmf/common.sh@478 -- # killprocess 2255782 00:29:25.755 15:53:05 -- common/autotest_common.sh@926 -- # '[' -z 2255782 ']' 00:29:25.755 15:53:05 -- common/autotest_common.sh@930 -- # kill -0 2255782 00:29:25.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2255782) - No such process 00:29:25.755 15:53:05 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2255782 is not found' 00:29:25.755 Process with pid 2255782 is not found 00:29:25.755 15:53:05 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:29:25.755 15:53:05 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:27.130 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:27.130 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:29:27.130 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:27.130 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:27.131 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:27.131 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:27.131 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:29:27.131 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:27.131 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:27.131 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:27.131 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:27.131 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:27.131 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:27.131 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:27.131 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:27.131 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:27.131 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:27.131 15:53:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:27.131 15:53:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:27.131 15:53:06 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:27.131 15:53:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:27.131 15:53:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:27.131 15:53:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:27.131 15:53:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:29.661 15:53:08 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:29.661 00:29:29.661 real 0m35.096s 00:29:29.661 user 1m2.622s 00:29:29.661 sys 0m8.433s 00:29:29.661 15:53:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:29.661 15:53:08 -- common/autotest_common.sh@10 -- # set +x 00:29:29.661 ************************************ 00:29:29.661 END TEST nvmf_abort_qd_sizes 00:29:29.661 ************************************ 00:29:29.661 15:53:08 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:29.662 15:53:08 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:29.662 15:53:08 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:29.662 15:53:08 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:29.662 15:53:08 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:29:29.662 15:53:08 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:29:29.662 15:53:08 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:29:29.662 15:53:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:29.662 15:53:08 -- common/autotest_common.sh@10 -- # set +x 00:29:29.662 15:53:08 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:29:29.662 15:53:08 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:29:29.662 15:53:08 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:29:29.662 15:53:08 -- common/autotest_common.sh@10 -- # set +x 00:29:31.037 INFO: APP EXITING 00:29:31.037 INFO: killing all VMs 00:29:31.037 INFO: killing vhost app 00:29:31.037 INFO: EXIT DONE 00:29:32.414 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:29:32.414 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:29:32.414 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:29:32.414 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:29:32.415 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:29:32.415 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:29:32.415 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:29:32.415 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:29:32.415 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:29:32.415 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:29:32.415 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:29:32.415 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:29:32.415 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:29:32.415 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:29:32.415 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:29:32.415 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:29:32.415 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:29:33.795 Cleaning 00:29:33.795 Removing: /var/run/dpdk/spdk0/config 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:33.795 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:33.795 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:33.795 Removing: /var/run/dpdk/spdk1/config 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:33.795 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:33.795 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:33.795 Removing: /var/run/dpdk/spdk1/mp_socket 00:29:33.795 Removing: /var/run/dpdk/spdk2/config 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:33.795 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:33.795 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:33.795 Removing: /var/run/dpdk/spdk3/config 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:33.795 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:33.795 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:33.795 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:33.795 Removing: /var/run/dpdk/spdk4/config 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:33.795 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:33.795 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:33.795 Removing: /dev/shm/bdev_svc_trace.1 00:29:33.795 Removing: /dev/shm/nvmf_trace.0 00:29:33.795 Removing: /dev/shm/spdk_tgt_trace.pid1990973 00:29:33.795 Removing: /var/run/dpdk/spdk0 00:29:33.795 Removing: /var/run/dpdk/spdk1 00:29:33.795 Removing: /var/run/dpdk/spdk2 00:29:33.796 Removing: /var/run/dpdk/spdk3 00:29:33.796 Removing: /var/run/dpdk/spdk4 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1989274 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1990023 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1990973 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1991455 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1992678 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1993613 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1993924 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1994125 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1994460 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1994659 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1994829 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1995097 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1995278 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1995750 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1998288 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1998466 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1998760 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1998900 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1999217 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1999359 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1999736 00:29:33.796 Removing: /var/run/dpdk/spdk_pid1999805 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2000106 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2000247 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2000411 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2000551 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2000928 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2001082 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2001282 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2001583 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2001603 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2001784 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2001931 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2002090 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2002349 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2002518 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2002658 00:29:33.796 
Removing: /var/run/dpdk/spdk_pid2002936 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2003084 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2003245 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2003464 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2003666 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2003813 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2004031 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2004231 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2004400 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2004540 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2004818 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2004967 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2005127 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2005386 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2005545 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2005695 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2005967 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2006113 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2006280 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2006545 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2006698 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2006848 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2007126 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2007272 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2007427 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2007664 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2007853 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2008002 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2008286 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2008436 00:29:33.796 Removing: /var/run/dpdk/spdk_pid2008592 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2008862 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2009034 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2009180 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2009458 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2009527 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2009732 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2011933 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2067412 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2069935 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2077661 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2081012 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2083521 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2083941 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2089022 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2089307 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2091974 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2095786 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2097986 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2104612 00:29:33.797 Removing: /var/run/dpdk/spdk_pid2110012 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2111481 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2112669 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2123253 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2125546 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2128506 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2129725 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2131097 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2131251 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2131519 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2131676 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2132266 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2133639 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2134652 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2135100 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2138605 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2142175 00:29:34.057 
Removing: /var/run/dpdk/spdk_pid2146435 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2170114 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2173031 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2177409 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2178396 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2179639 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2182229 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2184743 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2189119 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2189122 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2192025 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2192209 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2192347 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2192618 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2192629 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2193728 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2194951 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2196166 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2197381 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2198670 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2199942 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2203828 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2204165 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2205624 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2206470 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2210767 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2212822 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2216433 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2220065 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2223752 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2224278 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2224714 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2225136 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2225728 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2226279 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2226844 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2227392 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2229997 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2230204 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2234068 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2234248 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2235891 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2241658 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2241663 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2244727 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2246120 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2247596 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2248367 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2249811 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2250683 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2256215 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2256617 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2257027 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2258500 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2258913 00:29:34.057 Removing: /var/run/dpdk/spdk_pid2259321 00:29:34.057 Clean 00:29:34.057 killing process with pid 1961101 00:29:42.167 killing process with pid 1961098 00:29:42.167 killing process with pid 1961100 00:29:42.167 killing process with pid 1961099 00:29:42.167 15:53:21 -- common/autotest_common.sh@1436 -- # return 0 00:29:42.167 15:53:21 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:29:42.167 15:53:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:42.167 15:53:21 -- common/autotest_common.sh@10 -- # set +x 00:29:42.167 15:53:21 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:29:42.167 15:53:21 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:29:42.167 15:53:21 -- common/autotest_common.sh@10 -- # set +x 00:29:42.167 15:53:21 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:29:42.167 15:53:21 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:29:42.167 15:53:21 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:29:42.167 15:53:21 -- spdk/autotest.sh@394 -- # hash lcov 00:29:42.167 15:53:21 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:29:42.167 15:53:21 -- spdk/autotest.sh@396 -- # hostname 00:29:42.167 15:53:21 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:29:42.167 geninfo: WARNING: invalid characters removed from testname! 00:30:08.693 15:53:46 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:11.219 15:53:50 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:13.835 15:53:52 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:16.357 15:53:55 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:18.884 15:53:58 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:22.160 15:54:00 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:30:25.438 15:54:04 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:25.438 15:54:04 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:25.438 15:54:04 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:25.438 15:54:04 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:25.438 15:54:04 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:25.438 15:54:04 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.438 15:54:04 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.438 15:54:04 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.438 15:54:04 -- paths/export.sh@5 -- $ export PATH 00:30:25.438 15:54:04 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:25.438 15:54:04 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:30:25.438 15:54:04 -- common/autobuild_common.sh@435 -- $ date +%s 00:30:25.438 15:54:04 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720619644.XXXXXX 00:30:25.438 15:54:04 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720619644.vPatwH 00:30:25.438 15:54:04 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:30:25.438 15:54:04 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:30:25.438 15:54:04 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:30:25.438 15:54:04 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:25.438 15:54:04 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:25.438 15:54:04 -- common/autobuild_common.sh@451 -- $ get_config_params 00:30:25.438 15:54:04 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:30:25.438 15:54:04 -- common/autotest_common.sh@10 -- $ set +x 00:30:25.439 15:54:04 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk' 00:30:25.439 15:54:04 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:30:25.439 15:54:04 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:25.439 15:54:04 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:25.439 15:54:04 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:25.439 15:54:04 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:25.439 15:54:04 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:25.439 15:54:04 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:25.439 15:54:04 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:25.439 15:54:04 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:25.439 15:54:04 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:25.439 + [[ -n 1918794 ]] 00:30:25.439 + sudo kill 1918794 00:30:25.448 [Pipeline] } 00:30:25.467 [Pipeline] // stage 00:30:25.472 [Pipeline] } 00:30:25.494 [Pipeline] // timeout 00:30:25.500 [Pipeline] } 00:30:25.521 [Pipeline] // catchError 00:30:25.528 [Pipeline] } 00:30:25.547 [Pipeline] // wrap 00:30:25.554 [Pipeline] } 00:30:25.582 [Pipeline] // catchError 00:30:25.592 [Pipeline] stage 00:30:25.595 [Pipeline] { (Epilogue) 00:30:25.613 [Pipeline] catchError 00:30:25.615 [Pipeline] { 00:30:25.632 [Pipeline] echo 00:30:25.634 Cleanup processes 00:30:25.640 [Pipeline] sh 00:30:25.921 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:25.921 2271234 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:25.935 [Pipeline] sh 00:30:26.215 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:30:26.215 ++ grep -v 'sudo pgrep' 00:30:26.215 ++ awk '{print $1}' 00:30:26.215 + sudo kill -9 00:30:26.215 + true 00:30:26.226 [Pipeline] sh 00:30:26.505 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:36.476 [Pipeline] sh 00:30:36.755 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:36.755 Artifacts sizes are good 00:30:36.768 [Pipeline] archiveArtifacts 00:30:36.772 Archiving artifacts 00:30:36.977 [Pipeline] sh 00:30:37.267 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:30:37.279 [Pipeline] cleanWs 00:30:37.287 [WS-CLEANUP] Deleting project workspace... 00:30:37.287 [WS-CLEANUP] Deferred wipeout is used... 00:30:37.293 [WS-CLEANUP] done 00:30:37.296 [Pipeline] } 00:30:37.314 [Pipeline] // catchError 00:30:37.322 [Pipeline] sh 00:30:37.595 + logger -p user.info -t JENKINS-CI 00:30:37.602 [Pipeline] } 00:30:37.619 [Pipeline] // stage 00:30:37.626 [Pipeline] } 00:30:37.644 [Pipeline] // node 00:30:37.649 [Pipeline] End of Pipeline 00:30:37.696 Finished: SUCCESS
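
For reference, the cov_base.info / cov_test.info / cov_total.info processing visible near the end of this log is the standard lcov capture, merge and filter flow driven by autotest.sh. The sketch below condenses it: the test label comes from the build host name (spdk-gp-11 above), the filter patterns are the ones shown in the log, and the final genhtml step is an assumption added for completeness, since the log ends before any HTML report is rendered.

#!/usr/bin/env bash
# Sketch of the coverage post-processing: capture counters gathered while the tests ran,
# merge them with the pre-test baseline, then strip non-SPDK code from the total.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
OUT=$SPDK_DIR/../output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"

lcov $LCOV_OPTS -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"
lcov $LCOV_OPTS -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"

# Remove DPDK, system headers and SPDK example/app sources from the combined tracefile.
for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $LCOV_OPTS -r "$OUT/cov_total.info" "$pattern" -o "$OUT/cov_total.info"
done

# Assumption: render an HTML report from the filtered data (not shown in the log above).
genhtml -q "$OUT/cov_total.info" -o "$OUT/coverage"
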